netdev.vger.kernel.org archive mirror
* [PATCH v5 bpf-next 0/9] libbpf: add bpf_link and tracing attach APIs
From: Andrii Nakryiko @ 2019-07-01 23:58 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

This patchset adds the following APIs to allow attaching BPF programs to
tracing entities:
- bpf_program__attach_perf_event for attaching to any opened perf event FD,
  allowing users full control;
- bpf_program__attach_kprobe for attaching to kernel probes (both entry and
  return probes);
- bpf_program__attach_uprobe for attaching to user probes (both entry/return);
- bpf_program__attach_tracepoint for attaching to kernel tracepoints;
- bpf_program__attach_raw_tracepoint for attaching to raw kernel tracepoints
  (a wrapper around bpf_raw_tracepoint_open).

This set of APIs makes libbpf more useful for tracing applications.

All attach APIs return an abstract struct bpf_link that encapsulates the
logic of detaching a BPF program. See patch #2 for details. bpf_assoc was
considered as an alternative name for this opaque "handle", but bpf_link
seems semantically appropriate and is nice and short.
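
For illustration, end-to-end usage might look roughly like this (a minimal
sketch; the object file and section names here are made up and error
handling is trimmed):

  struct bpf_object *obj;
  struct bpf_program *prog;
  struct bpf_link *link;
  int prog_fd;

  /* load BPF object and find program by its SEC() title */
  bpf_prog_load("./probe.o", BPF_PROG_TYPE_KPROBE, &obj, &prog_fd);
  prog = bpf_object__find_program_by_title(obj, "kprobe/sys_nanosleep");

  /* attach; on failure the returned pointer encodes a negative errno */
  link = bpf_program__attach_kprobe(prog, false /* retprobe */,
                                    "sys_nanosleep");
  if (libbpf_get_error(link))
          return;

  /* ... run workload ... */

  bpf_link__destroy(link); /* detach program and free link resources */
  bpf_object__close(obj);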

Preparatory patch #1 makes the internal libbpf_strerror_r helper function
work with negative error codes, lifting the burden off callers to keep track
of the error sign.
Patch #2 adds bpf_link abstraction.
Patch #3 adds attach_perf_event, which is the base for all other APIs.
Patch #4 adds kprobe/uprobe APIs.
Patch #5 adds tracepoint API.
Patch #6 adds raw_tracepoint API.
Patch #7 converts one existing test to use attach_perf_event.
Patch #8 adds new kprobe/uprobe tests.
Patch #9 converts some selftests currently using tracepoint to new APIs.

v4->v5:
- typo and small nits (Yonghong);
- validate pfd in attach_perf_event (Yonghong);
- parse_uint_from_file fixes (Yonghong);
- check for malloc failure in attach_raw_tracepoint (Yonghong);
- attach_probes selftests clean up fixes (Yonghong);
v3->v4:
- proper errno handling (Stanislav);
- bpf_fd -> prog_fd (Stanislav);
- switch to fprintf (Song);
v2->v3:
- added bpf_link concept (Daniel);
- didn't add generic bpf_link__attach_program for reasons described in [0];
- dropped Stanislav's Reviewed-by from patches #2-#6, in case he doesn't like
  the change;
v1->v2:
- preserve errno before close() call (Stanislav);
- use libbpf_perf_event_disable_and_close in selftest (Stanislav);
- remove unnecessary memset (Stanislav);

[0] https://lore.kernel.org/bpf/CAEf4BzZ7EM5eP2eaZn7T2Yb5QgVRiwAs+epeLR1g01TTx-6m6Q@mail.gmail.com/

Andrii Nakryiko (9):
  libbpf: make libbpf_strerror_r agnostic to sign of error
  libbpf: introduce concept of bpf_link
  libbpf: add ability to attach/detach BPF program to perf event
  libbpf: add kprobe/uprobe attach API
  libbpf: add tracepoint attach API
  libbpf: add raw tracepoint attach API
  selftests/bpf: switch test to new attach_perf_event API
  selftests/bpf: add kprobe/uprobe selftests
  selftests/bpf: convert existing tracepoint tests to new APIs

 tools/lib/bpf/libbpf.c                        | 367 ++++++++++++++++++
 tools/lib/bpf/libbpf.h                        |  21 +
 tools/lib/bpf/libbpf.map                      |   8 +-
 tools/lib/bpf/str_error.c                     |   2 +-
 .../selftests/bpf/prog_tests/attach_probe.c   | 166 ++++++++
 .../bpf/prog_tests/stacktrace_build_id.c      |  55 +--
 .../bpf/prog_tests/stacktrace_build_id_nmi.c  |  31 +-
 .../selftests/bpf/prog_tests/stacktrace_map.c |  43 +-
 .../bpf/prog_tests/stacktrace_map_raw_tp.c    |  15 +-
 .../selftests/bpf/progs/test_attach_probe.c   |  55 +++
 10 files changed, 664 insertions(+), 99 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/attach_probe.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_attach_probe.c

-- 
2.17.1



* [PATCH v5 bpf-next 1/9] libbpf: make libbpf_strerror_r agnostic to sign of error
From: Andrii Nakryiko @ 2019-07-01 23:58 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

It's often inconvenient to flip the sign of an error before passing it into
libbpf_strerror_r. It's better for the helper to handle that automatically.
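
For example, after this change both of the following calls produce the same
message (a hypothetical caller, for illustration only):

  char buf[128];

  libbpf_strerror_r(ENOENT, buf, sizeof(buf));  /* "No such file or directory" */
  libbpf_strerror_r(-ENOENT, buf, sizeof(buf)); /* same; sign is normalized */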

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 tools/lib/bpf/str_error.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/bpf/str_error.c b/tools/lib/bpf/str_error.c
index 00e48ac5b806..b8064eedc177 100644
--- a/tools/lib/bpf/str_error.c
+++ b/tools/lib/bpf/str_error.c
@@ -11,7 +11,7 @@
  */
 char *libbpf_strerror_r(int err, char *dst, int len)
 {
-	int ret = strerror_r(err, dst, len);
+	int ret = strerror_r(err < 0 ? -err : err, dst, len);
 	if (ret)
 		snprintf(dst, len, "ERROR: strerror_r(%d)=%d", err, ret);
 	return dst;
-- 
2.17.1



* [PATCH v5 bpf-next 2/9] libbpf: introduce concept of bpf_link
From: Andrii Nakryiko @ 2019-07-01 23:58 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

bpf_link is an abstraction of an association between a BPF program and one
of many possible BPF attachment points (hooks). This allows a uniform
interface for detaching BPF programs regardless of the nature of the link
and how it was created. Details of creating and setting up a specific
bpf_link are handled by the corresponding attachment methods
(bpf_program__attach_xxx) added in subsequent commits. Once successfully
created, a bpf_link eventually has to be destroyed with bpf_link__destroy(),
at which point the BPF program is disassociated from its hook and all the
relevant resources are freed.
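
Internally, a concrete link type is expected to embed struct bpf_link as its
first member and supply a destroy callback, roughly like this (a sketch of
the pattern the later patches follow; the type name here is hypothetical):

  struct bpf_link_foo {
          struct bpf_link link; /* must stay first */
          int fd;               /* type-specific state */
  };

  static int bpf_link__destroy_foo(struct bpf_link *link)
  {
          struct bpf_link_foo *l = (void *)link;

          /* release type-specific resources only */
          return close(l->fd);
  }

bpf_link__destroy() then invokes link->destroy(link) and frees the link
itself, so a destroy callback never frees the link.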

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
---
 tools/lib/bpf/libbpf.c   | 17 +++++++++++++++++
 tools/lib/bpf/libbpf.h   |  4 ++++
 tools/lib/bpf/libbpf.map |  3 ++-
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 6e6ebef11ba3..455795e6f8af 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -3941,6 +3941,23 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
 	return 0;
 }
 
+struct bpf_link {
+	int (*destroy)(struct bpf_link *link);
+};
+
+int bpf_link__destroy(struct bpf_link *link)
+{
+	int err;
+
+	if (!link)
+		return 0;
+
+	err = link->destroy(link);
+	free(link);
+
+	return err;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index d639f47e3110..5082a5ebb0c2 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -165,6 +165,10 @@ LIBBPF_API int bpf_program__pin(struct bpf_program *prog, const char *path);
 LIBBPF_API int bpf_program__unpin(struct bpf_program *prog, const char *path);
 LIBBPF_API void bpf_program__unload(struct bpf_program *prog);
 
+struct bpf_link;
+
+LIBBPF_API int bpf_link__destroy(struct bpf_link *link);
+
 struct bpf_insn;
 
 /*
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 2c6d835620d2..3cde850fc8da 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -167,10 +167,11 @@ LIBBPF_0.0.3 {
 
 LIBBPF_0.0.4 {
 	global:
+		bpf_link__destroy;
+		bpf_object__load_xattr;
 		btf_dump__dump_type;
 		btf_dump__free;
 		btf_dump__new;
 		btf__parse_elf;
-		bpf_object__load_xattr;
 		libbpf_num_possible_cpus;
 } LIBBPF_0.0.3;
-- 
2.17.1



* [PATCH v5 bpf-next 3/9] libbpf: add ability to attach/detach BPF program to perf event
From: Andrii Nakryiko @ 2019-07-01 23:58 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

bpf_program__attach_perf_event allows attaching a BPF program to an existing
perf event hook, providing the most generic and lowest-level way to attach
BPF programs. It returns a struct bpf_link, which should be passed to
bpf_link__destroy to detach and free the resources associated with the link.
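
As an illustration, attaching an already-loaded BPF_PROG_TYPE_PERF_EVENT
program to a sampling CPU-clock event might look like this (a minimal sketch
under those assumptions, not taken from this series):

  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>
  #include <bpf/libbpf.h>

  static struct bpf_link *attach_cpu_clock(struct bpf_program *prog)
  {
          struct perf_event_attr attr = {
                  .type = PERF_TYPE_SOFTWARE,
                  .size = sizeof(attr),
                  .config = PERF_COUNT_SW_CPU_CLOCK,
                  .freq = 1,
                  .sample_freq = 99,
          };
          struct bpf_link *link;
          int pfd;

          pfd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
                        0 /* cpu */, -1 /* group_fd */,
                        PERF_FLAG_FD_CLOEXEC);
          if (pfd < 0)
                  return NULL;

          link = bpf_program__attach_perf_event(prog, pfd);
          if (libbpf_get_error(link)) {
                  /* the link takes ownership of pfd only on success */
                  close(pfd);
                  return NULL;
          }
          /* bpf_link__destroy() will disable the event and close pfd */
          return link;
  }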

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
---
 tools/lib/bpf/libbpf.c   | 63 ++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   |  3 ++
 tools/lib/bpf/libbpf.map |  1 +
 3 files changed, 67 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 455795e6f8af..bcaa294f819d 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -32,6 +32,7 @@
 #include <linux/limits.h>
 #include <linux/perf_event.h>
 #include <linux/ring_buffer.h>
+#include <sys/ioctl.h>
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <sys/vfs.h>
@@ -3958,6 +3959,68 @@ int bpf_link__destroy(struct bpf_link *link)
 	return err;
 }
 
+struct bpf_link_fd {
+	struct bpf_link link; /* has to be at the top of struct */
+	int fd; /* hook FD */
+};
+
+static int bpf_link__destroy_perf_event(struct bpf_link *link)
+{
+	struct bpf_link_fd *l = (void *)link;
+	int err;
+
+	err = ioctl(l->fd, PERF_EVENT_IOC_DISABLE, 0);
+	if (err)
+		err = -errno;
+
+	close(l->fd);
+	return err;
+}
+
+struct bpf_link *bpf_program__attach_perf_event(struct bpf_program *prog,
+						int pfd)
+{
+	char errmsg[STRERR_BUFSIZE];
+	struct bpf_link_fd *link;
+	int prog_fd, err;
+
+	if (pfd < 0) {
+		pr_warning("program '%s': invalid perf event FD %d\n",
+			   bpf_program__title(prog, false), pfd);
+		return ERR_PTR(-EINVAL);
+	}
+	prog_fd = bpf_program__fd(prog);
+	if (prog_fd < 0) {
+		pr_warning("program '%s': can't attach BPF program w/o FD (did you load it?)\n",
+			   bpf_program__title(prog, false));
+		return ERR_PTR(-EINVAL);
+	}
+
+	link = malloc(sizeof(*link));
+	if (!link)
+		return ERR_PTR(-ENOMEM);
+	link->link.destroy = &bpf_link__destroy_perf_event;
+	link->fd = pfd;
+
+	if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0) {
+		err = -errno;
+		free(link);
+		pr_warning("program '%s': failed to attach to pfd %d: %s\n",
+			   bpf_program__title(prog, false), pfd,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return ERR_PTR(err);
+	}
+	if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
+		err = -errno;
+		free(link);
+		pr_warning("program '%s': failed to enable pfd %d: %s\n",
+			   bpf_program__title(prog, false), pfd,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return ERR_PTR(err);
+	}
+	return (struct bpf_link *)link;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 5082a5ebb0c2..1bf66c4a9330 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -169,6 +169,9 @@ struct bpf_link;
 
 LIBBPF_API int bpf_link__destroy(struct bpf_link *link);
 
+LIBBPF_API struct bpf_link *
+bpf_program__attach_perf_event(struct bpf_program *prog, int pfd);
+
 struct bpf_insn;
 
 /*
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 3cde850fc8da..756f5aa802e9 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -169,6 +169,7 @@ LIBBPF_0.0.4 {
 	global:
 		bpf_link__destroy;
 		bpf_object__load_xattr;
+		bpf_program__attach_perf_event;
 		btf_dump__dump_type;
 		btf_dump__free;
 		btf_dump__new;
-- 
2.17.1



* [PATCH v5 bpf-next 4/9] libbpf: add kprobe/uprobe attach API
From: Andrii Nakryiko @ 2019-07-01 23:58 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

Add the ability to attach to kernel and user probes and retprobes. The
implementation depends on perf event support for kprobes/uprobes.
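
For illustration, usage might look like this (a minimal sketch; the probed
function, binary path and offset are made up, and error handling is trimmed):

  struct bpf_link *klink, *ulink;

  /* kernel return probe on do_sys_open */
  klink = bpf_program__attach_kprobe(kprog, true /* retprobe */,
                                     "do_sys_open");

  /* user-space entry probe at a given offset within a binary,
   * limited to the current process (pid 0 means "self")
   */
  ulink = bpf_program__attach_uprobe(uprog, false /* retprobe */,
                                     0 /* self pid */, "/proc/self/exe",
                                     0x1234 /* func offset */);

  /* ... */

  bpf_link__destroy(ulink);
  bpf_link__destroy(klink);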

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
---
 tools/lib/bpf/libbpf.c   | 169 +++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   |   7 ++
 tools/lib/bpf/libbpf.map |   2 +
 3 files changed, 178 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index bcaa294f819d..7b6142408b15 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4021,6 +4021,175 @@ struct bpf_link *bpf_program__attach_perf_event(struct bpf_program *prog,
 	return (struct bpf_link *)link;
 }
 
+/*
+ * this function is expected to parse integer in the range of [0, 2^31-1] from
+ * given file using scanf format string fmt. If actual parsed value is
+ * negative, the result might be indistinguishable from error
+ */
+static int parse_uint_from_file(const char *file, const char *fmt)
+{
+	char buf[STRERR_BUFSIZE];
+	int err, ret;
+	FILE *f;
+
+	f = fopen(file, "r");
+	if (!f) {
+		err = -errno;
+		pr_debug("failed to open '%s': %s\n", file,
+			 libbpf_strerror_r(err, buf, sizeof(buf)));
+		return err;
+	}
+	err = fscanf(f, fmt, &ret);
+	if (err != 1) {
+		err = err == EOF ? -EIO : -errno;
+		pr_debug("failed to parse '%s': %s\n", file,
+			libbpf_strerror_r(err, buf, sizeof(buf)));
+		fclose(f);
+		return err;
+	}
+	fclose(f);
+	return ret;
+}
+
+static int determine_kprobe_perf_type(void)
+{
+	const char *file = "/sys/bus/event_source/devices/kprobe/type";
+
+	return parse_uint_from_file(file, "%d\n");
+}
+
+static int determine_uprobe_perf_type(void)
+{
+	const char *file = "/sys/bus/event_source/devices/uprobe/type";
+
+	return parse_uint_from_file(file, "%d\n");
+}
+
+static int determine_kprobe_retprobe_bit(void)
+{
+	const char *file = "/sys/bus/event_source/devices/kprobe/format/retprobe";
+
+	return parse_uint_from_file(file, "config:%d\n");
+}
+
+static int determine_uprobe_retprobe_bit(void)
+{
+	const char *file = "/sys/bus/event_source/devices/uprobe/format/retprobe";
+
+	return parse_uint_from_file(file, "config:%d\n");
+}
+
+static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
+				 uint64_t offset, int pid)
+{
+	struct perf_event_attr attr = {};
+	char errmsg[STRERR_BUFSIZE];
+	int type, pfd, err;
+
+	type = uprobe ? determine_uprobe_perf_type()
+		      : determine_kprobe_perf_type();
+	if (type < 0) {
+		pr_warning("failed to determine %s perf type: %s\n",
+			   uprobe ? "uprobe" : "kprobe",
+			   libbpf_strerror_r(type, errmsg, sizeof(errmsg)));
+		return type;
+	}
+	if (retprobe) {
+		int bit = uprobe ? determine_uprobe_retprobe_bit()
+				 : determine_kprobe_retprobe_bit();
+
+		if (bit < 0) {
+			pr_warning("failed to determine %s retprobe bit: %s\n",
+				   uprobe ? "uprobe" : "kprobe",
+				   libbpf_strerror_r(bit, errmsg,
+						     sizeof(errmsg)));
+			return bit;
+		}
+		attr.config |= 1 << bit;
+	}
+	attr.size = sizeof(attr);
+	attr.type = type;
+	attr.config1 = (uint64_t)(void *)name; /* kprobe_func or uprobe_path */
+	attr.config2 = offset;		       /* kprobe_addr or probe_offset */
+
+	/* pid filter is meaningful only for uprobes */
+	pfd = syscall(__NR_perf_event_open, &attr,
+		      pid < 0 ? -1 : pid /* pid */,
+		      pid == -1 ? 0 : -1 /* cpu */,
+		      -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
+	if (pfd < 0) {
+		err = -errno;
+		pr_warning("%s perf_event_open() failed: %s\n",
+			   uprobe ? "uprobe" : "kprobe",
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	return pfd;
+}
+
+struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
+					    bool retprobe,
+					    const char *func_name)
+{
+	char errmsg[STRERR_BUFSIZE];
+	struct bpf_link *link;
+	int pfd, err;
+
+	pfd = perf_event_open_probe(false /* uprobe */, retprobe, func_name,
+				    0 /* offset */, -1 /* pid */);
+	if (pfd < 0) {
+		pr_warning("program '%s': failed to create %s '%s' perf event: %s\n",
+			   bpf_program__title(prog, false),
+			   retprobe ? "kretprobe" : "kprobe", func_name,
+			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return ERR_PTR(pfd);
+	}
+	link = bpf_program__attach_perf_event(prog, pfd);
+	if (IS_ERR(link)) {
+		close(pfd);
+		err = PTR_ERR(link);
+		pr_warning("program '%s': failed to attach to %s '%s': %s\n",
+			   bpf_program__title(prog, false),
+			   retprobe ? "kretprobe" : "kprobe", func_name,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return link;
+	}
+	return link;
+}
+
+struct bpf_link *bpf_program__attach_uprobe(struct bpf_program *prog,
+					    bool retprobe, pid_t pid,
+					    const char *binary_path,
+					    size_t func_offset)
+{
+	char errmsg[STRERR_BUFSIZE];
+	struct bpf_link *link;
+	int pfd, err;
+
+	pfd = perf_event_open_probe(true /* uprobe */, retprobe,
+				    binary_path, func_offset, pid);
+	if (pfd < 0) {
+		pr_warning("program '%s': failed to create %s '%s:0x%zx' perf event: %s\n",
+			   bpf_program__title(prog, false),
+			   retprobe ? "uretprobe" : "uprobe",
+			   binary_path, func_offset,
+			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return ERR_PTR(pfd);
+	}
+	link = bpf_program__attach_perf_event(prog, pfd);
+	if (IS_ERR(link)) {
+		close(pfd);
+		err = PTR_ERR(link);
+		pr_warning("program '%s': failed to attach to %s '%s:0x%zx': %s\n",
+			   bpf_program__title(prog, false),
+			   retprobe ? "uretprobe" : "uprobe",
+			   binary_path, func_offset,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return link;
+	}
+	return link;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 1bf66c4a9330..bd767cc11967 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -171,6 +171,13 @@ LIBBPF_API int bpf_link__destroy(struct bpf_link *link);
 
 LIBBPF_API struct bpf_link *
 bpf_program__attach_perf_event(struct bpf_program *prog, int pfd);
+LIBBPF_API struct bpf_link *
+bpf_program__attach_kprobe(struct bpf_program *prog, bool retprobe,
+			   const char *func_name);
+LIBBPF_API struct bpf_link *
+bpf_program__attach_uprobe(struct bpf_program *prog, bool retprobe,
+			   pid_t pid, const char *binary_path,
+			   size_t func_offset);
 
 struct bpf_insn;
 
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 756f5aa802e9..57a40fb60718 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -169,7 +169,9 @@ LIBBPF_0.0.4 {
 	global:
 		bpf_link__destroy;
 		bpf_object__load_xattr;
+		bpf_program__attach_kprobe;
 		bpf_program__attach_perf_event;
+		bpf_program__attach_uprobe;
 		btf_dump__dump_type;
 		btf_dump__free;
 		btf_dump__new;
-- 
2.17.1



* [PATCH v5 bpf-next 5/9] libbpf: add tracepoint attach API
From: Andrii Nakryiko @ 2019-07-01 23:58 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

Allow attaching BPF programs to kernel tracepoints, specified by tracepoint
category and name.
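
For example (a minimal sketch, assuming prog is a loaded tracepoint program;
error handling is trimmed):

  struct bpf_link *link;

  link = bpf_program__attach_tracepoint(prog, "sched", "sched_switch");
  if (libbpf_get_error(link))
          return;

  /* ... */

  bpf_link__destroy(link); /* detaches and closes the perf event FD */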

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
---
 tools/lib/bpf/libbpf.c   | 79 ++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   |  4 ++
 tools/lib/bpf/libbpf.map |  1 +
 3 files changed, 84 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 7b6142408b15..d8c1743efc4a 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4190,6 +4190,85 @@ struct bpf_link *bpf_program__attach_uprobe(struct bpf_program *prog,
 	return link;
 }
 
+static int determine_tracepoint_id(const char *tp_category,
+				   const char *tp_name)
+{
+	char file[PATH_MAX];
+	int ret;
+
+	ret = snprintf(file, sizeof(file),
+		       "/sys/kernel/debug/tracing/events/%s/%s/id",
+		       tp_category, tp_name);
+	if (ret < 0)
+		return -errno;
+	if (ret >= sizeof(file)) {
+		pr_debug("tracepoint %s/%s path is too long\n",
+			 tp_category, tp_name);
+		return -E2BIG;
+	}
+	return parse_uint_from_file(file, "%d\n");
+}
+
+static int perf_event_open_tracepoint(const char *tp_category,
+				      const char *tp_name)
+{
+	struct perf_event_attr attr = {};
+	char errmsg[STRERR_BUFSIZE];
+	int tp_id, pfd, err;
+
+	tp_id = determine_tracepoint_id(tp_category, tp_name);
+	if (tp_id < 0) {
+		pr_warning("failed to determine tracepoint '%s/%s' perf event ID: %s\n",
+			   tp_category, tp_name,
+			   libbpf_strerror_r(tp_id, errmsg, sizeof(errmsg)));
+		return tp_id;
+	}
+
+	attr.type = PERF_TYPE_TRACEPOINT;
+	attr.size = sizeof(attr);
+	attr.config = tp_id;
+
+	pfd = syscall(__NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */,
+		      -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
+	if (pfd < 0) {
+		err = -errno;
+		pr_warning("tracepoint '%s/%s' perf_event_open() failed: %s\n",
+			   tp_category, tp_name,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	return pfd;
+}
+
+struct bpf_link *bpf_program__attach_tracepoint(struct bpf_program *prog,
+						const char *tp_category,
+						const char *tp_name)
+{
+	char errmsg[STRERR_BUFSIZE];
+	struct bpf_link *link;
+	int pfd, err;
+
+	pfd = perf_event_open_tracepoint(tp_category, tp_name);
+	if (pfd < 0) {
+		pr_warning("program '%s': failed to create tracepoint '%s/%s' perf event: %s\n",
+			   bpf_program__title(prog, false),
+			   tp_category, tp_name,
+			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return ERR_PTR(pfd);
+	}
+	link = bpf_program__attach_perf_event(prog, pfd);
+	if (IS_ERR(link)) {
+		close(pfd);
+		err = PTR_ERR(link);
+		pr_warning("program '%s': failed to attach to tracepoint '%s/%s': %s\n",
+			   bpf_program__title(prog, false),
+			   tp_category, tp_name,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return link;
+	}
+	return link;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index bd767cc11967..60611f4b4e1d 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -178,6 +178,10 @@ LIBBPF_API struct bpf_link *
 bpf_program__attach_uprobe(struct bpf_program *prog, bool retprobe,
 			   pid_t pid, const char *binary_path,
 			   size_t func_offset);
+LIBBPF_API struct bpf_link *
+bpf_program__attach_tracepoint(struct bpf_program *prog,
+			       const char *tp_category,
+			       const char *tp_name);
 
 struct bpf_insn;
 
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 57a40fb60718..3c618b75ef65 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -171,6 +171,7 @@ LIBBPF_0.0.4 {
 		bpf_object__load_xattr;
 		bpf_program__attach_kprobe;
 		bpf_program__attach_perf_event;
+		bpf_program__attach_tracepoint;
 		bpf_program__attach_uprobe;
 		btf_dump__dump_type;
 		btf_dump__free;
-- 
2.17.1



* [PATCH v5 bpf-next 6/9] libbpf: add raw tracepoint attach API
From: Andrii Nakryiko @ 2019-07-01 23:59 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

Add a wrapper utilizing bpf_link "infrastructure" to allow attaching BPF
programs to raw tracepoints.
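
Usage mirrors the tracepoint API, except no perf event is involved and the
link simply owns the raw tracepoint FD (a minimal sketch, assuming prog is a
loaded raw tracepoint program):

  struct bpf_link *link;

  link = bpf_program__attach_raw_tracepoint(prog, "sched_switch");
  if (libbpf_get_error(link))
          return;

  /* ... */

  bpf_link__destroy(link); /* closes the raw tracepoint FD */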

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
---
 tools/lib/bpf/libbpf.c   | 39 +++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   |  3 +++
 tools/lib/bpf/libbpf.map |  1 +
 3 files changed, 43 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index d8c1743efc4a..72ee3d8d20a8 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4269,6 +4269,45 @@ struct bpf_link *bpf_program__attach_tracepoint(struct bpf_program *prog,
 	return link;
 }
 
+static int bpf_link__destroy_fd(struct bpf_link *link)
+{
+	struct bpf_link_fd *l = (void *)link;
+
+	return close(l->fd);
+}
+
+struct bpf_link *bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
+						    const char *tp_name)
+{
+	char errmsg[STRERR_BUFSIZE];
+	struct bpf_link_fd *link;
+	int prog_fd, pfd;
+
+	prog_fd = bpf_program__fd(prog);
+	if (prog_fd < 0) {
+		pr_warning("program '%s': can't attach before loaded\n",
+			   bpf_program__title(prog, false));
+		return ERR_PTR(-EINVAL);
+	}
+
+	link = malloc(sizeof(*link));
+	if (!link)
+		return ERR_PTR(-ENOMEM);
+	link->link.destroy = &bpf_link__destroy_fd;
+
+	pfd = bpf_raw_tracepoint_open(tp_name, prog_fd);
+	if (pfd < 0) {
+		pfd = -errno;
+		free(link);
+		pr_warning("program '%s': failed to attach to raw tracepoint '%s': %s\n",
+			   bpf_program__title(prog, false), tp_name,
+			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return ERR_PTR(pfd);
+	}
+	link->fd = pfd;
+	return (struct bpf_link *)link;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 60611f4b4e1d..f55933784f95 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -182,6 +182,9 @@ LIBBPF_API struct bpf_link *
 bpf_program__attach_tracepoint(struct bpf_program *prog,
 			       const char *tp_category,
 			       const char *tp_name);
+LIBBPF_API struct bpf_link *
+bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
+				   const char *tp_name);
 
 struct bpf_insn;
 
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 3c618b75ef65..e6b7d4edbc93 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -171,6 +171,7 @@ LIBBPF_0.0.4 {
 		bpf_object__load_xattr;
 		bpf_program__attach_kprobe;
 		bpf_program__attach_perf_event;
+		bpf_program__attach_raw_tracepoint;
 		bpf_program__attach_tracepoint;
 		bpf_program__attach_uprobe;
 		btf_dump__dump_type;
-- 
2.17.1



* [PATCH v5 bpf-next 7/9] selftests/bpf: switch test to new attach_perf_event API
From: Andrii Nakryiko @ 2019-07-01 23:59 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

Use the new bpf_program__attach_perf_event() in a test that previously
relied on direct ioctl manipulation.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 .../bpf/prog_tests/stacktrace_build_id_nmi.c  | 31 +++++++++----------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
index 1c1a2f75f3d8..9557b7dfb782 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
@@ -17,6 +17,7 @@ static __u64 read_perf_max_sample_freq(void)
 void test_stacktrace_build_id_nmi(void)
 {
 	int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd;
+	const char *prog_name = "tracepoint/random/urandom_read";
 	const char *file = "./test_stacktrace_build_id.o";
 	int err, pmu_fd, prog_fd;
 	struct perf_event_attr attr = {
@@ -25,7 +26,9 @@ void test_stacktrace_build_id_nmi(void)
 		.config = PERF_COUNT_HW_CPU_CYCLES,
 	};
 	__u32 key, previous_key, val, duration = 0;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
+	struct bpf_link *link;
 	char buf[256];
 	int i, j;
 	struct bpf_stack_build_id id_offs[PERF_MAX_STACK_DEPTH];
@@ -39,6 +42,10 @@ void test_stacktrace_build_id_nmi(void)
 	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
 		return;
 
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
+		goto close_prog;
+
 	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
 			 0 /* cpu 0 */, -1 /* group id */,
 			 0 /* flags */);
@@ -47,15 +54,12 @@ void test_stacktrace_build_id_nmi(void)
 		  pmu_fd, errno))
 		goto close_prog;
 
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
-	if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n",
-		  err, errno))
-		goto close_pmu;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
-	if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n",
-		  err, errno))
-		goto disable_pmu;
+	link = bpf_program__attach_perf_event(prog, pmu_fd);
+	if (CHECK(IS_ERR(link), "attach_perf_event",
+		  "err %ld\n", PTR_ERR(link))) {
+		close(pmu_fd);
+		goto close_prog;
+	}
 
 	/* find map fds */
 	control_map_fd = bpf_find_map(__func__, obj, "control_map");
@@ -134,8 +138,7 @@ void test_stacktrace_build_id_nmi(void)
 	 * try it one more time.
 	 */
 	if (build_id_matches < 1 && retry--) {
-		ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-		close(pmu_fd);
+		bpf_link__destroy(link);
 		bpf_object__close(obj);
 		printf("%s:WARN:Didn't find expected build ID from the map, retrying\n",
 		       __func__);
@@ -154,11 +157,7 @@ void test_stacktrace_build_id_nmi(void)
 	 */
 
 disable_pmu:
-	ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-
-close_pmu:
-	close(pmu_fd);
-
+	bpf_link__destroy(link);
 close_prog:
 	bpf_object__close(obj);
 }
-- 
2.17.1



* [PATCH v5 bpf-next 8/9] selftests/bpf: add kprobe/uprobe selftests
From: Andrii Nakryiko @ 2019-07-01 23:59 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

Add tests verifying kprobe/kretprobe/uprobe/uretprobe APIs work as
expected.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 .../selftests/bpf/prog_tests/attach_probe.c   | 166 ++++++++++++++++++
 .../selftests/bpf/progs/test_attach_probe.c   |  55 ++++++
 2 files changed, 221 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/attach_probe.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_attach_probe.c

diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
new file mode 100644
index 000000000000..a4686395522c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
@@ -0,0 +1,166 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+
+ssize_t get_base_addr() {
+	size_t start;
+	char buf[256];
+	FILE *f;
+
+	f = fopen("/proc/self/maps", "r");
+	if (!f)
+		return -errno;
+
+	while (fscanf(f, "%zx-%*x %s %*s\n", &start, buf) == 2) {
+		if (strcmp(buf, "r-xp") == 0) {
+			fclose(f);
+			return start;
+		}
+	}
+
+	fclose(f);
+	return -EINVAL;
+}
+
+#ifdef __x86_64__
+#define SYS_KPROBE_NAME "__x64_sys_nanosleep"
+#else
+#define SYS_KPROBE_NAME "sys_nanosleep"
+#endif
+
+void test_attach_probe(void)
+{
+	const char *kprobe_name = "kprobe/sys_nanosleep";
+	const char *kretprobe_name = "kretprobe/sys_nanosleep";
+	const char *uprobe_name = "uprobe/trigger_func";
+	const char *uretprobe_name = "uretprobe/trigger_func";
+	const int kprobe_idx = 0, kretprobe_idx = 1;
+	const int uprobe_idx = 2, uretprobe_idx = 3;
+	const char *file = "./test_attach_probe.o";
+	struct bpf_program *kprobe_prog, *kretprobe_prog;
+	struct bpf_program *uprobe_prog, *uretprobe_prog;
+	struct bpf_object *obj;
+	int err, prog_fd, duration = 0, res;
+	struct bpf_link *kprobe_link = NULL;
+	struct bpf_link *kretprobe_link = NULL;
+	struct bpf_link *uprobe_link = NULL;
+	struct bpf_link *uretprobe_link = NULL;
+	int results_map_fd;
+	size_t uprobe_offset;
+	ssize_t base_addr;
+
+	base_addr = get_base_addr();
+	if (CHECK(base_addr < 0, "get_base_addr",
+		  "failed to find base addr: %zd", base_addr))
+		return;
+	uprobe_offset = (size_t)&get_base_addr - base_addr;
+
+	/* load programs */
+	err = bpf_prog_load(file, BPF_PROG_TYPE_KPROBE, &obj, &prog_fd);
+	if (CHECK(err, "obj_load", "err %d errno %d\n", err, errno))
+		return;
+
+	kprobe_prog = bpf_object__find_program_by_title(obj, kprobe_name);
+	if (CHECK(!kprobe_prog, "find_probe",
+		  "prog '%s' not found\n", kprobe_name))
+		goto cleanup;
+	kretprobe_prog = bpf_object__find_program_by_title(obj, kretprobe_name);
+	if (CHECK(!kretprobe_prog, "find_probe",
+		  "prog '%s' not found\n", kretprobe_name))
+		goto cleanup;
+	uprobe_prog = bpf_object__find_program_by_title(obj, uprobe_name);
+	if (CHECK(!uprobe_prog, "find_probe",
+		  "prog '%s' not found\n", uprobe_name))
+		goto cleanup;
+	uretprobe_prog = bpf_object__find_program_by_title(obj, uretprobe_name);
+	if (CHECK(!uretprobe_prog, "find_probe",
+		  "prog '%s' not found\n", uretprobe_name))
+		goto cleanup;
+
+	/* load maps */
+	results_map_fd = bpf_find_map(__func__, obj, "results_map");
+	if (CHECK(results_map_fd < 0, "find_results_map",
+		  "err %d\n", results_map_fd))
+		goto cleanup;
+
+	kprobe_link = bpf_program__attach_kprobe(kprobe_prog,
+						 false /* retprobe */,
+						 SYS_KPROBE_NAME);
+	if (CHECK(IS_ERR(kprobe_link), "attach_kprobe",
+		  "err %ld\n", PTR_ERR(kprobe_link))) {
+		kprobe_link = NULL;
+		goto cleanup;
+	}
+	kretprobe_link = bpf_program__attach_kprobe(kretprobe_prog,
+						    true /* retprobe */,
+						    SYS_KPROBE_NAME);
+	if (CHECK(IS_ERR(kretprobe_link), "attach_kretprobe",
+		  "err %ld\n", PTR_ERR(kretprobe_link))) {
+		kretprobe_link = NULL;
+		goto cleanup;
+	}
+	uprobe_link = bpf_program__attach_uprobe(uprobe_prog,
+						 false /* retprobe */,
+						 0 /* self pid */,
+						 "/proc/self/exe",
+						 uprobe_offset);
+	if (CHECK(IS_ERR(uprobe_link), "attach_uprobe",
+		  "err %ld\n", PTR_ERR(uprobe_link))) {
+		uprobe_link = NULL;
+		goto cleanup;
+	}
+	uretprobe_link = bpf_program__attach_uprobe(uretprobe_prog,
+						    true /* retprobe */,
+						    -1 /* any pid */,
+						    "/proc/self/exe",
+						    uprobe_offset);
+	if (CHECK(IS_ERR(uretprobe_link), "attach_uretprobe",
+		  "err %ld\n", PTR_ERR(uretprobe_link))) {
+		uretprobe_link = NULL;
+		goto cleanup;
+	}
+
+	/* trigger & validate kprobe && kretprobe */
+	usleep(1);
+
+	err = bpf_map_lookup_elem(results_map_fd, &kprobe_idx, &res);
+	if (CHECK(err, "get_kprobe_res",
+		  "failed to get kprobe res: %d\n", err))
+		goto cleanup;
+	if (CHECK(res != kprobe_idx + 1, "check_kprobe_res",
+		  "wrong kprobe res: %d\n", res))
+		goto cleanup;
+
+	err = bpf_map_lookup_elem(results_map_fd, &kretprobe_idx, &res);
+	if (CHECK(err, "get_kretprobe_res",
+		  "failed to get kretprobe res: %d\n", err))
+		goto cleanup;
+	if (CHECK(res != kretprobe_idx + 1, "check_kretprobe_res",
+		  "wrong kretprobe res: %d\n", res))
+		goto cleanup;
+
+	/* trigger & validate uprobe & uretprobe */
+	get_base_addr();
+
+	err = bpf_map_lookup_elem(results_map_fd, &uprobe_idx, &res);
+	if (CHECK(err, "get_uprobe_res",
+		  "failed to get uprobe res: %d\n", err))
+		goto cleanup;
+	if (CHECK(res != uprobe_idx + 1, "check_uprobe_res",
+		  "wrong uprobe res: %d\n", res))
+		goto cleanup;
+
+	err = bpf_map_lookup_elem(results_map_fd, &uretprobe_idx, &res);
+	if (CHECK(err, "get_uretprobe_res",
+		  "failed to get uretprobe res: %d\n", err))
+		goto cleanup;
+	if (CHECK(res != uretprobe_idx + 1, "check_uretprobe_res",
+		  "wrong uretprobe res: %d\n", res))
+		goto cleanup;
+
+cleanup:
+	bpf_link__destroy(kprobe_link);
+	bpf_link__destroy(kretprobe_link);
+	bpf_link__destroy(uprobe_link);
+	bpf_link__destroy(uretprobe_link);
+	bpf_object__close(obj);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_attach_probe.c b/tools/testing/selftests/bpf/progs/test_attach_probe.c
new file mode 100644
index 000000000000..7a7c5cd728c8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_attach_probe.c
@@ -0,0 +1,55 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Facebook
+
+#include <linux/ptrace.h>
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+
+struct {
+	int type;
+	int max_entries;
+	int *key;
+	int *value;
+} results_map SEC(".maps") = {
+	.type = BPF_MAP_TYPE_ARRAY,
+	.max_entries = 4,
+};
+
+SEC("kprobe/sys_nanosleep")
+int handle_sys_nanosleep_entry(struct pt_regs *ctx)
+{
+	const int key = 0, value = 1;
+
+	bpf_map_update_elem(&results_map, &key, &value, 0);
+	return 0;
+}
+
+SEC("kretprobe/sys_nanosleep")
+int handle_sys_getpid_return(struct pt_regs *ctx)
+{
+	const int key = 1, value = 2;
+
+	bpf_map_update_elem(&results_map, &key, &value, 0);
+	return 0;
+}
+
+SEC("uprobe/trigger_func")
+int handle_uprobe_entry(struct pt_regs *ctx)
+{
+	const int key = 2, value = 3;
+
+	bpf_map_update_elem(&results_map, &key, &value, 0);
+	return 0;
+}
+
+SEC("uretprobe/trigger_func")
+int handle_uprobe_return(struct pt_regs *ctx)
+{
+	const int key = 3, value = 4;
+
+	bpf_map_update_elem(&results_map, &key, &value, 0);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
+__u32 _version SEC("version") = 1;
-- 
2.17.1



* [PATCH v5 bpf-next 9/9] selftests/bpf: convert existing tracepoint tests to new APIs
From: Andrii Nakryiko @ 2019-07-01 23:59 UTC
  To: andrii.nakryiko, bpf, netdev, ast, daniel, kernel-team, yhs
  Cc: Andrii Nakryiko

Convert existing tests that attach to tracepoints to use the
bpf_program__attach_tracepoint API instead.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 .../bpf/prog_tests/stacktrace_build_id.c      | 55 ++++---------------
 .../selftests/bpf/prog_tests/stacktrace_map.c | 43 +++------------
 .../bpf/prog_tests/stacktrace_map_raw_tp.c    | 15 ++++-
 3 files changed, 32 insertions(+), 81 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c
index 3aab2b083c71..ac44fda84833 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c
@@ -4,11 +4,13 @@
 void test_stacktrace_build_id(void)
 {
 	int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd;
+	const char *prog_name = "tracepoint/random/urandom_read";
 	const char *file = "./test_stacktrace_build_id.o";
-	int bytes, efd, err, pmu_fd, prog_fd, stack_trace_len;
-	struct perf_event_attr attr = {};
+	int err, prog_fd, stack_trace_len;
 	__u32 key, previous_key, val, duration = 0;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
+	struct bpf_link *link = NULL;
 	char buf[256];
 	int i, j;
 	struct bpf_stack_build_id id_offs[PERF_MAX_STACK_DEPTH];
@@ -18,44 +20,16 @@ void test_stacktrace_build_id(void)
 retry:
 	err = bpf_prog_load(file, BPF_PROG_TYPE_TRACEPOINT, &obj, &prog_fd);
 	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
-		goto out;
+		return;
 
-	/* Get the ID for the sched/sched_switch tracepoint */
-	snprintf(buf, sizeof(buf),
-		 "/sys/kernel/debug/tracing/events/random/urandom_read/id");
-	efd = open(buf, O_RDONLY, 0);
-	if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno))
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
 		goto close_prog;
 
-	bytes = read(efd, buf, sizeof(buf));
-	close(efd);
-	if (CHECK(bytes <= 0 || bytes >= sizeof(buf),
-		  "read", "bytes %d errno %d\n", bytes, errno))
+	link = bpf_program__attach_tracepoint(prog, "random", "urandom_read");
+	if (CHECK(IS_ERR(link), "attach_tp", "err %ld\n", PTR_ERR(link)))
 		goto close_prog;
 
-	/* Open the perf event and attach bpf progrram */
-	attr.config = strtol(buf, NULL, 0);
-	attr.type = PERF_TYPE_TRACEPOINT;
-	attr.sample_type = PERF_SAMPLE_RAW | PERF_SAMPLE_CALLCHAIN;
-	attr.sample_period = 1;
-	attr.wakeup_events = 1;
-	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
-			 0 /* cpu 0 */, -1 /* group id */,
-			 0 /* flags */);
-	if (CHECK(pmu_fd < 0, "perf_event_open", "err %d errno %d\n",
-		  pmu_fd, errno))
-		goto close_prog;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
-	if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n",
-		  err, errno))
-		goto close_pmu;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
-	if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n",
-		  err, errno))
-		goto disable_pmu;
-
 	/* find map fds */
 	control_map_fd = bpf_find_map(__func__, obj, "control_map");
 	if (CHECK(control_map_fd < 0, "bpf_find_map control_map",
@@ -133,8 +107,7 @@ void test_stacktrace_build_id(void)
 	 * try it one more time.
 	 */
 	if (build_id_matches < 1 && retry--) {
-		ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-		close(pmu_fd);
+		bpf_link__destroy(link);
 		bpf_object__close(obj);
 		printf("%s:WARN:Didn't find expected build ID from the map, retrying\n",
 		       __func__);
@@ -152,14 +125,8 @@ void test_stacktrace_build_id(void)
 	      "err %d errno %d\n", err, errno);
 
 disable_pmu:
-	ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-
-close_pmu:
-	close(pmu_fd);
+	bpf_link__destroy(link);
 
 close_prog:
 	bpf_object__close(obj);
-
-out:
-	return;
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_map.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_map.c
index 2bfd50a0d6d1..fc539335c5b3 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_map.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_map.c
@@ -4,50 +4,26 @@
 void test_stacktrace_map(void)
 {
 	int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd;
+	const char *prog_name = "tracepoint/sched/sched_switch";
+	int err, prog_fd, stack_trace_len;
 	const char *file = "./test_stacktrace_map.o";
-	int bytes, efd, err, pmu_fd, prog_fd, stack_trace_len;
-	struct perf_event_attr attr = {};
 	__u32 key, val, duration = 0;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
-	char buf[256];
+	struct bpf_link *link;
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_TRACEPOINT, &obj, &prog_fd);
 	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
 		return;
 
-	/* Get the ID for the sched/sched_switch tracepoint */
-	snprintf(buf, sizeof(buf),
-		 "/sys/kernel/debug/tracing/events/sched/sched_switch/id");
-	efd = open(buf, O_RDONLY, 0);
-	if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno))
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
 		goto close_prog;
 
-	bytes = read(efd, buf, sizeof(buf));
-	close(efd);
-	if (bytes <= 0 || bytes >= sizeof(buf))
+	link = bpf_program__attach_tracepoint(prog, "sched", "sched_switch");
+	if (CHECK(IS_ERR(link), "attach_tp", "err %ld\n", PTR_ERR(link)))
 		goto close_prog;
 
-	/* Open the perf event and attach bpf progrram */
-	attr.config = strtol(buf, NULL, 0);
-	attr.type = PERF_TYPE_TRACEPOINT;
-	attr.sample_type = PERF_SAMPLE_RAW | PERF_SAMPLE_CALLCHAIN;
-	attr.sample_period = 1;
-	attr.wakeup_events = 1;
-	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
-			 0 /* cpu 0 */, -1 /* group id */,
-			 0 /* flags */);
-	if (CHECK(pmu_fd < 0, "perf_event_open", "err %d errno %d\n",
-		  pmu_fd, errno))
-		goto close_prog;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
-	if (err)
-		goto disable_pmu;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
-	if (err)
-		goto disable_pmu;
-
 	/* find map fds */
 	control_map_fd = bpf_find_map(__func__, obj, "control_map");
 	if (control_map_fd < 0)
@@ -96,8 +72,7 @@ void test_stacktrace_map(void)
 disable_pmu:
 	error_cnt++;
 disable_pmu_noerr:
-	ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-	close(pmu_fd);
+	bpf_link__destroy(link);
 close_prog:
 	bpf_object__close(obj);
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
index 1f8387d80fd7..fbfa8e76cf63 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
@@ -3,18 +3,25 @@
 
 void test_stacktrace_map_raw_tp(void)
 {
+	const char *prog_name = "tracepoint/sched/sched_switch";
 	int control_map_fd, stackid_hmap_fd, stackmap_fd;
 	const char *file = "./test_stacktrace_map.o";
-	int efd, err, prog_fd;
 	__u32 key, val, duration = 0;
+	int err, prog_fd;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
+	struct bpf_link *link = NULL;
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_RAW_TRACEPOINT, &obj, &prog_fd);
 	if (CHECK(err, "prog_load raw tp", "err %d errno %d\n", err, errno))
 		return;
 
-	efd = bpf_raw_tracepoint_open("sched_switch", prog_fd);
-	if (CHECK(efd < 0, "raw_tp_open", "err %d errno %d\n", efd, errno))
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
+		goto close_prog;
+
+	link = bpf_program__attach_raw_tracepoint(prog, "sched_switch");
+	if (CHECK(IS_ERR(link), "attach_raw_tp", "err %ld\n", PTR_ERR(link)))
 		goto close_prog;
 
 	/* find map fds */
@@ -55,5 +62,7 @@ void test_stacktrace_map_raw_tp(void)
 close_prog:
 	error_cnt++;
 close_prog_noerr:
+	if (!IS_ERR_OR_NULL(link))
+		bpf_link__destroy(link);
 	bpf_object__close(obj);
 }
-- 
2.17.1



* Re: [PATCH v5 bpf-next 0/9] libbpf: add bpf_link and tracing attach APIs
From: Yonghong Song @ 2019-07-02  0:36 UTC
  To: Andrii Nakryiko, andrii.nakryiko, bpf, netdev,
	Alexei Starovoitov, daniel, Kernel Team



On 7/1/19 4:58 PM, Andrii Nakryiko wrote:
> [...]

Looks good to me. Ack for the whole series.
Acked-by: Yonghong Song <yhs@fb.com>


* Re: [PATCH v5 bpf-next 4/9] libbpf: add kprobe/uprobe attach API
From: Daniel Borkmann @ 2019-07-03 12:39 UTC
  To: Andrii Nakryiko, andrii.nakryiko, bpf, netdev, ast, kernel-team, yhs

On 07/02/2019 01:58 AM, Andrii Nakryiko wrote:
> [...]
> +	if (IS_ERR(link)) {
> +		close(pfd);
> +		err = PTR_ERR(link);
> +		pr_warning("program '%s': failed to attach to %s '%s:0x%zx': %s\n",
> +			   bpf_program__title(prog, false),
> +			   retprobe ? "uretprobe" : "uprobe",
> +			   binary_path, func_offset,
> +			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> +		return link;
> +	}
> +	return link;
> +}

Hm, this only addresses half the feedback I had in prior version [0]. Patch 2/9
with bpf_link with destructor looks good to me, but my feedback from back then was
that all the kprobe/uprobe/tracepoint/raw_tracepoint should be split API-wise, so
you'll end up with something like the below, that is, 1) a set of functions that
only /create/ the bpf_link handle /once/, and 2) a helper that allows /attaching/
progs to one or multiple bpf_links. The set of APIs would look like:

struct bpf_link *bpf_link__create_kprobe(bool retprobe, const char *func_name);
struct bpf_link *bpf_link__create_uprobe(bool retprobe, pid_t pid,
					 const char *binary_path,
					 size_t func_offset);
int bpf_program__attach_to_link(struct bpf_link *link, struct bpf_program *prog);
int bpf_link__destroy(struct bpf_link *link);

This seems much more natural to me. Right now you sort of do both in one single API.
Detangling the bpf_program__attach_{uprobe,kprobe}() would also avoid that you have
to redo all the perf_event_open_probe() work over and over in order to get the pfd
context where you can later attach something to. Given bpf_program__attach_to_link()
API, you also wouldn't need to expose the bpf_program__attach_perf_event() from
patch 3/9. Thoughts?

  [0] https://lore.kernel.org/bpf/a7780057-1d70-9ace-960b-ff65867dc277@iogearbox.net/
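
(For illustration, a rough sketch of how this proposed split API might
look at a call site; bpf_link__create_kprobe() and
bpf_program__attach_to_link() are the proposed names from above, not
existing libbpf APIs:)

	struct bpf_link *link;
	int err;

	/* step 1: create the kprobe hook once */
	link = bpf_link__create_kprobe(false /* retprobe */, "sys_nanosleep");
	if (IS_ERR(link))
		return PTR_ERR(link);

	/* step 2: attach a program to it in a separate call */
	err = bpf_program__attach_to_link(link, prog);
	if (err) {
		bpf_link__destroy(link);	/* created but never attached */
		return err;
	}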

>  enum bpf_perf_event_ret
>  bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
>  			   void **copy_mem, size_t *copy_size,
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 1bf66c4a9330..bd767cc11967 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -171,6 +171,13 @@ LIBBPF_API int bpf_link__destroy(struct bpf_link *link);
>  
>  LIBBPF_API struct bpf_link *
>  bpf_program__attach_perf_event(struct bpf_program *prog, int pfd);
> +LIBBPF_API struct bpf_link *
> +bpf_program__attach_kprobe(struct bpf_program *prog, bool retprobe,
> +			   const char *func_name);
> +LIBBPF_API struct bpf_link *
> +bpf_program__attach_uprobe(struct bpf_program *prog, bool retprobe,
> +			   pid_t pid, const char *binary_path,
> +			   size_t func_offset);
>  
>  struct bpf_insn;
>  
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 756f5aa802e9..57a40fb60718 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -169,7 +169,9 @@ LIBBPF_0.0.4 {
>  	global:
>  		bpf_link__destroy;
>  		bpf_object__load_xattr;
> +		bpf_program__attach_kprobe;
>  		bpf_program__attach_perf_event;
> +		bpf_program__attach_uprobe;
>  		btf_dump__dump_type;
>  		btf_dump__free;
>  		btf_dump__new;
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v5 bpf-next 4/9] libbpf: add kprobe/uprobe attach API
  2019-07-03 12:39   ` Daniel Borkmann
@ 2019-07-03 16:47     ` Andrii Nakryiko
  2019-07-04  0:57       ` Andrii Nakryiko
  0 siblings, 1 reply; 19+ messages in thread
From: Andrii Nakryiko @ 2019-07-03 16:47 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Kernel Team, Yonghong Song

On Wed, Jul 3, 2019 at 5:39 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 07/02/2019 01:58 AM, Andrii Nakryiko wrote:
> > Add ability to attach to kernel and user probes and retprobes.
> > Implementation depends on perf event support for kprobes/uprobes.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > Reviewed-by: Stanislav Fomichev <sdf@google.com>
> > ---

<snip>

> > +}
>
> Hm, this only addresses half the feedback I had in prior version [0]. Patch 2/9

Hi Daniel,

Yes, and I explained why in my reply to your original email; please see [1].

I started with exactly the separation you wanted, but it turned out to
be a cumbersome and harder-to-use user API, while also being somewhat
more complicated to implement. Mostly because in that design bpf_link
exists in two states: created-but-not-attached and attached. This
forces the user to do additional clean-up if creation succeeded but
attachment failed. It also makes it a bit harder to provide good
contextual error logging if something goes wrong, because not all
original parameters are preserved, as some of them might be needed
only for creation, but not for attachment (or we'd have to allocate
and copy extra stuff just for logging purposes).

On the other hand, having a separate generic attach_event method doesn't
help that much, as there is little common functionality to reuse
across all kinds of possible bpf_link types.


  [1] https://lore.kernel.org/bpf/20190621045555.4152743-4-andriin@fb.com/T/#m6cfc141e7b57970bc948134bf671a46972b95134
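
(For contrast, a sketch of the combined API from this patch at a call
site, cf. the selftests in patch 8/9; the link only ever exists in the
attached state, so there is a single failure point and a single
cleanup call:)

	struct bpf_link *link;

	link = bpf_program__attach_kprobe(prog, false /* retprobe */,
					  "sys_nanosleep");
	if (IS_ERR(link))
		return PTR_ERR(link);	/* nothing else to clean up */
	/* ... use the attached program ... */
	bpf_link__destroy(link);	/* detaches and frees in one call */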

> with bpf_link with destructor looks good to me, but my feedback from back then was
> that all the kprobe/uprobe/tracepoint/raw_tracepoint should be split API-wise, so
> you'll end up with something like the below, that is, 1) a set of functions that
> only /create/ the bpf_link handle /once/, and 2) a helper that allows /attaching/
> progs to one or multiple bpf_links. The set of APIs would look like:
>
> struct bpf_link *bpf_link__create_kprobe(bool retprobe, const char *func_name);
> struct bpf_link *bpf_link__create_uprobe(bool retprobe, pid_t pid,
>                                          const char *binary_path,
>                                          size_t func_offset);
> int bpf_program__attach_to_link(struct bpf_link *link, struct bpf_program *prog);
> int bpf_link__destroy(struct bpf_link *link);
>
> This seems much more natural to me. Right now you sort of do both in one single API.

It felt that way for me as well, until I implemented it and used it in
selftests. And then it felt unnecessarily verbose without giving any
benefit. I still have a local patchset with that change; I can post it
as an RFC if you don't trust my judgement. Please let me know.

> Detangling the bpf_program__attach_{uprobe,kprobe}() would also avoid that you have
> to redo all the perf_event_open_probe() work over and over in order to get the pfd

What do you mean by "redo all the perf_event_open_probe work"? In
terms of code, I just reuse the same function, so there is no
duplicate code. And in either design you'll have to open that
perf_event, so that work will have to be done one way or another.

> context where you can later attach something to. Given bpf_program__attach_to_link()
> API, you also wouldn't need to expose the bpf_program__attach_perf_event() from

I'd expose attach_perf_event either way; it's a high-level API I want
to provide. We have use cases where the user creates some specific
non-kprobe/non-tracepoint perf events and wants to attach BPF programs
to them, e.g., HW counter overflow events for CPU profilers. So that
API is not some kind of leaked abstraction; it's something I want to
have anyway.
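
(A rough sketch of that profiler use case, assuming the perf event is
opened directly via perf_event_open(2); error handling and the per-CPU
loop a real profiler would need are omitted:)

	/* needs <linux/perf_event.h> and <sys/syscall.h> */
	struct perf_event_attr attr = {
		.type = PERF_TYPE_HARDWARE,
		.size = sizeof(attr),
		.config = PERF_COUNT_HW_CPU_CYCLES,
		.freq = 1,
		.sample_freq = 99,	/* ~99 samples per second */
	};
	int pfd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
			  0 /* cpu */, -1 /* group_fd */,
			  PERF_FLAG_FD_CLOEXEC);
	struct bpf_link *link = bpf_program__attach_perf_event(prog, pfd);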


> patch 3/9. Thoughts?

I believe this hybrid approach provides better usability without
compromising anything. The only theoretical benefit of completely
separating bpf_link creation and attachment is that user code would be
able to separate those two steps code organization-wise. That's easily
doable with custom application code (just encapsulate all the
parameters and the type of attachment, and pass that around until you
actually need to attach; see the sketch below), and I don't think it's
necessary in practice (so far I've never needed anything like that).
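
(A minimal sketch of that kind of application-side encapsulation;
probe_spec and attach_spec are made-up names, just to illustrate:)

	struct probe_spec {
		bool retprobe;
		const char *func_name;
	};

	/* decide what to attach to early, actually attach later */
	static struct bpf_link *attach_spec(struct bpf_program *prog,
					    const struct probe_spec *spec)
	{
		return bpf_program__attach_kprobe(prog, spec->retprobe,
						  spec->func_name);
	}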

Hope I convinced you that, while elegant, it's not that practical. The
hybrid approach isn't inelegant either, and it doesn't produce code
duplication (it actually eliminates some unnecessary allocations,
e.g., for storing tp_name for raw_tracepoint attach) :)

<snip>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v5 bpf-next 4/9] libbpf: add kprobe/uprobe attach API
  2019-07-03 16:47     ` Andrii Nakryiko
@ 2019-07-04  0:57       ` Andrii Nakryiko
  2019-07-05 20:46         ` Daniel Borkmann
  0 siblings, 1 reply; 19+ messages in thread
From: Andrii Nakryiko @ 2019-07-04  0:57 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Kernel Team, Yonghong Song

On Wed, Jul 3, 2019 at 9:47 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Wed, Jul 3, 2019 at 5:39 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >
> > On 07/02/2019 01:58 AM, Andrii Nakryiko wrote:
> > > Add ability to attach to kernel and user probes and retprobes.
> > > Implementation depends on perf event support for kprobes/uprobes.
> > >
> > > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > > Reviewed-by: Stanislav Fomichev <sdf@google.com>
> > > ---

<snip>

> > > +}
> >
> > Hm, this only addresses half the feedback I had in prior version [0]. Patch 2/9
>
> Hi Daniel,
>
> Yes, and I explained why in my reply to your original email; please see [1].
>
> I started with exactly the separation you wanted, but it turned out to
> be a cumbersome and harder-to-use user API, while also being somewhat
> more complicated to implement. Mostly because in that design bpf_link
> exists in two states: created-but-not-attached and attached. This
> forces the user to do additional clean-up if creation succeeded but
> attachment failed. It also makes it a bit harder to provide good
> contextual error logging if something goes wrong, because not all
> original parameters are preserved, as some of them might be needed
> only for creation, but not for attachment (or we'd have to allocate
> and copy extra stuff just for logging purposes).
>
> On the other hand, having a separate generic attach_event method doesn't
> help that much, as there is little common functionality to reuse
> across all kinds of possible bpf_link types.
>
>
>   [1] https://lore.kernel.org/bpf/20190621045555.4152743-4-andriin@fb.com/T/#m6cfc141e7b57970bc948134bf671a46972b95134
>
> > with bpf_link with destructor looks good to me, but my feedback from back then was
> > that all the kprobe/uprobe/tracepoint/raw_tracepoint should be split API-wise, so
> > you'll end up with something like the below, that is, 1) a set of functions that
> > only /create/ the bpf_link handle /once/, and 2) a helper that allows /attaching/
> > progs to one or multiple bpf_links. The set of APIs would look like:
> >
> > struct bpf_link *bpf_link__create_kprobe(bool retprobe, const char *func_name);
> > struct bpf_link *bpf_link__create_uprobe(bool retprobe, pid_t pid,
> >                                          const char *binary_path,
> >                                          size_t func_offset);
> > int bpf_program__attach_to_link(struct bpf_link *link, struct bpf_program *prog);
> > int bpf_link__destroy(struct bpf_link *link);
> >
> > This seems much more natural to me. Right now you sort of do both in one single API.
>
> It felt that way for me as well, until I implemented it and used it in
> selftests. And then it felt unnecessarily verbose without giving any
> benefit. I still have a local patchset with that change; I can post it
> as an RFC if you don't trust my judgement. Please let me know.
>
> > Detangling the bpf_program__attach_{uprobe,kprobe}() would also avoid that you have
> > to redo all the perf_event_open_probe() work over and over in order to get the pfd

So, re-reading this again, I wonder if you meant that with separate
bpf_link (or rather bpf_hook, in that case) creation and attachment
operations, one would be able to create a single bpf_hook for the same
kprobe and then attach multiple BPF programs to that single pfd
representing that specific probe.

If that's how I should have read it, I agree that it probably would be
possible for some types of hooks, but not for every type of hook. But
furthermore, how often in practice does the same application attach
many different BPF programs to the same hook? And it's also hard to
imagine that hook creation (i.e., creating such an FD for a BPF hook)
would ever be a bottleneck.

So I still think it's not a strong reason to go with an API that's
harder to use for typical use cases just because of hypothetical
benefits in some extreme cases.
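
(A sketch of what that hypothetical shared-hook usage from the first
paragraph above could look like; the bpf_hook__create_kprobe and
bpf_program__attach_to_hook names are made up for illustration and
don't exist in libbpf:)

	struct bpf_hook *hook;
	int err;

	hook = bpf_hook__create_kprobe(false /* retprobe */, "sys_nanosleep");
	/* one FD for the probe, N programs attached to it */
	err = bpf_program__attach_to_hook(hook, prog1);
	err = err ?: bpf_program__attach_to_hook(hook, prog2);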

<snip>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v5 bpf-next 4/9] libbpf: add kprobe/uprobe attach API
  2019-07-04  0:57       ` Andrii Nakryiko
@ 2019-07-05 20:46         ` Daniel Borkmann
  2019-07-06  3:48           ` Andrii Nakryiko
  0 siblings, 1 reply; 19+ messages in thread
From: Daniel Borkmann @ 2019-07-05 20:46 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Kernel Team, Yonghong Song

On 07/04/2019 02:57 AM, Andrii Nakryiko wrote:
> On Wed, Jul 3, 2019 at 9:47 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
[...]
>>   [1] https://lore.kernel.org/bpf/20190621045555.4152743-4-andriin@fb.com/T/#m6cfc141e7b57970bc948134bf671a46972b95134
>>
>>> with bpf_link with destructor looks good to me, but my feedback from back then was
>>> that all the kprobe/uprobe/tracepoint/raw_tracepoint should be split API-wise, so
>>> you'll end up with something like the below, that is, 1) a set of functions that
>>> only /create/ the bpf_link handle /once/, and 2) a helper that allows /attaching/
>>> progs to one or multiple bpf_links. The set of APIs would look like:
>>>
>>> struct bpf_link *bpf_link__create_kprobe(bool retprobe, const char *func_name);
>>> struct bpf_link *bpf_link__create_uprobe(bool retprobe, pid_t pid,
>>>                                          const char *binary_path,
>>>                                          size_t func_offset);
>>> int bpf_program__attach_to_link(struct bpf_link *link, struct bpf_program *prog);
>>> int bpf_link__destroy(struct bpf_link *link);
>>>
>>> This seems much more natural to me. Right now you sort of do both in one single API.
>>
>> It felt that way for me as well, until I implemented it and used it in
>> selftests. And then it felt unnecessarily verbose without giving any
>> benefit. I still have a local patchset with that change; I can post it
>> as an RFC if you don't trust my judgement. Please let me know.
>>
>>> Detangling the bpf_program__attach_{uprobe,kprobe}() would also avoid that you have
>>> to redo all the perf_event_open_probe() work over and over in order to get the pfd
> 
> So, re-reading this again, I wonder if you meant that with separate
> bpf_link (or rather bpf_hook, in that case) creation and attachment
> operations, one would be able to create a single bpf_hook for the same
> kprobe and then attach multiple BPF programs to that single pfd
> representing that specific probe.
>
> If that's how I should have read it, I agree that it probably would be
> possible for some types of hooks, but not for every type of hook. But
> furthermore, how often in practice does the same application attach
> many different BPF programs to the same hook? And it's also hard to
> imagine that hook creation (i.e., creating such an FD for a BPF hook)
> would ever be a bottleneck.
>
> So I still think it's not a strong reason to go with an API that's
> harder to use for typical use cases just because of hypothetical
> benefits in some extreme cases.

Was thinking along those lines, yes, since we run over an array of BPF
progs, but I just double-checked the kernel code again, and the
relationship of a BPF prog to a perf_event really is just 1:1; it's
just that the backing tp_event (trace_event_call) contains the shared
array. Given that, it all makes sense and there is no point in
splitting. Therefore, applied, thanks!
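
(For reference, the 1:1 relationship is visible at the syscall level,
too: bpf_program__attach_perf_event() from patch 3/9 boils down to a
PERF_EVENT_IOC_SET_BPF ioctl on a single perf event FD, and the kernel
refuses a second program on the same FD, IIRC with -EEXIST:)

	if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0)
		return -errno;	/* e.g., EEXIST if already attached */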

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v5 bpf-next 4/9] libbpf: add kprobe/uprobe attach API
  2019-07-05 20:46         ` Daniel Borkmann
@ 2019-07-06  3:48           ` Andrii Nakryiko
  0 siblings, 0 replies; 19+ messages in thread
From: Andrii Nakryiko @ 2019-07-06  3:48 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Kernel Team, Yonghong Song

On Fri, Jul 5, 2019 at 1:46 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 07/04/2019 02:57 AM, Andrii Nakryiko wrote:
> > On Wed, Jul 3, 2019 at 9:47 AM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> [...]
>
> Was thinking along those lines, yes, since we run over an array of BPF
> progs, but I just double-checked the kernel code again, and the
> relationship of a BPF prog to a perf_event really is just 1:1; it's
> just that the backing tp_event (trace_event_call) contains the shared
> array. Given that, it all makes sense and there is no point in
> splitting. Therefore, applied, thanks!

Great, thanks a lot!

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v5 bpf-next 8/9] selftests/bpf: add kprobe/uprobe selftests
  2019-07-01 23:59 ` [PATCH v5 bpf-next 8/9] selftests/bpf: add kprobe/uprobe selftests Andrii Nakryiko
@ 2019-07-06 17:21   ` Yonghong Song
  2019-07-06 17:42     ` Andrii Nakryiko
  0 siblings, 1 reply; 19+ messages in thread
From: Yonghong Song @ 2019-07-06 17:21 UTC (permalink / raw)
  To: Andrii Nakryiko, andrii.nakryiko, bpf, netdev,
	Alexei Starovoitov, daniel, Kernel Team



On 7/1/19 4:59 PM, Andrii Nakryiko wrote:
> Add tests verifying kprobe/kretprobe/uprobe/uretprobe APIs work as
> expected.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> Reviewed-by: Stanislav Fomichev <sdf@google.com>
> Acked-by: Song Liu <songliubraving@fb.com>
> ---
>   .../selftests/bpf/prog_tests/attach_probe.c   | 166 ++++++++++++++++++
>   .../selftests/bpf/progs/test_attach_probe.c   |  55 ++++++
>   2 files changed, 221 insertions(+)
>   create mode 100644 tools/testing/selftests/bpf/prog_tests/attach_probe.c
>   create mode 100644 tools/testing/selftests/bpf/progs/test_attach_probe.c
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> new file mode 100644
> index 000000000000..a4686395522c
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> @@ -0,0 +1,166 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <test_progs.h>
> +
> +ssize_t get_base_addr() {
> +	size_t start;
> +	char buf[256];
> +	FILE *f;
> +
> +	f = fopen("/proc/self/maps", "r");
> +	if (!f)
> +		return -errno;
> +
> +	while (fscanf(f, "%zx-%*x %s %*s\n", &start, buf) == 2) {
> +		if (strcmp(buf, "r-xp") == 0) {
> +			fclose(f);
> +			return start;
> +		}
> +	}
> +
> +	fclose(f);
> +	return -EINVAL;
> +}
> +
> +#ifdef __x86_64__
> +#define SYS_KPROBE_NAME "__x64_sys_nanosleep"
> +#else
> +#define SYS_KPROBE_NAME "sys_nanosleep"
> +#endif
> +
> +void test_attach_probe(void)
> +{
> +	const char *kprobe_name = "kprobe/sys_nanosleep";
> +	const char *kretprobe_name = "kretprobe/sys_nanosleep";
> +	const char *uprobe_name = "uprobe/trigger_func";
> +	const char *uretprobe_name = "uretprobe/trigger_func";
> +	const int kprobe_idx = 0, kretprobe_idx = 1;
> +	const int uprobe_idx = 2, uretprobe_idx = 3;
> +	const char *file = "./test_attach_probe.o";
> +	struct bpf_program *kprobe_prog, *kretprobe_prog;
> +	struct bpf_program *uprobe_prog, *uretprobe_prog;
> +	struct bpf_object *obj;
> +	int err, prog_fd, duration = 0, res;
> +	struct bpf_link *kprobe_link = NULL;
> +	struct bpf_link *kretprobe_link = NULL;
> +	struct bpf_link *uprobe_link = NULL;
> +	struct bpf_link *uretprobe_link = NULL;
> +	int results_map_fd;
> +	size_t uprobe_offset;
> +	ssize_t base_addr;
> +
> +	base_addr = get_base_addr();
> +	if (CHECK(base_addr < 0, "get_base_addr",
> +		  "failed to find base addr: %zd", base_addr))
> +		return;
> +	uprobe_offset = (size_t)&get_base_addr - base_addr;
> +
> +	/* load programs */
> +	err = bpf_prog_load(file, BPF_PROG_TYPE_KPROBE, &obj, &prog_fd);
> +	if (CHECK(err, "obj_load", "err %d errno %d\n", err, errno))
> +		return;
> +
> +	kprobe_prog = bpf_object__find_program_by_title(obj, kprobe_name);
> +	if (CHECK(!kprobe_prog, "find_probe",
> +		  "prog '%s' not found\n", kprobe_name))
> +		goto cleanup;
> +	kretprobe_prog = bpf_object__find_program_by_title(obj, kretprobe_name);
> +	if (CHECK(!kretprobe_prog, "find_probe",
> +		  "prog '%s' not found\n", kretprobe_name))
> +		goto cleanup;
> +	uprobe_prog = bpf_object__find_program_by_title(obj, uprobe_name);
> +	if (CHECK(!uprobe_prog, "find_probe",
> +		  "prog '%s' not found\n", uprobe_name))
> +		goto cleanup;
> +	uretprobe_prog = bpf_object__find_program_by_title(obj, uretprobe_name);
> +	if (CHECK(!uretprobe_prog, "find_probe",
> +		  "prog '%s' not found\n", uretprobe_name))
> +		goto cleanup;
> +
> +	/* load maps */
> +	results_map_fd = bpf_find_map(__func__, obj, "results_map");
> +	if (CHECK(results_map_fd < 0, "find_results_map",
> +		  "err %d\n", results_map_fd))
> +		goto cleanup;
> +
> +	kprobe_link = bpf_program__attach_kprobe(kprobe_prog,
> +						 false /* retprobe */,
> +						 SYS_KPROBE_NAME);
> +	if (CHECK(IS_ERR(kprobe_link), "attach_kprobe",
> +		  "err %ld\n", PTR_ERR(kprobe_link))) {
> +		kprobe_link = NULL;
> +		goto cleanup;
> +	}
> +	kretprobe_link = bpf_program__attach_kprobe(kretprobe_prog,
> +						    true /* retprobe */,
> +						    SYS_KPROBE_NAME);
> +	if (CHECK(IS_ERR(kretprobe_link), "attach_kretprobe",
> +		  "err %ld\n", PTR_ERR(kretprobe_link))) {
> +		kretprobe_link = NULL;
> +		goto cleanup;
> +	}
> +	uprobe_link = bpf_program__attach_uprobe(uprobe_prog,
> +						 false /* retprobe */,
> +						 0 /* self pid */,
> +						 "/proc/self/exe",
> +						 uprobe_offset);
> +	if (CHECK(IS_ERR(uprobe_link), "attach_uprobe",
> +		  "err %ld\n", PTR_ERR(uprobe_link))) {
> +		uprobe_link = NULL;
> +		goto cleanup;
> +	}
> +	uretprobe_link = bpf_program__attach_uprobe(uretprobe_prog,
> +						    true /* retprobe */,
> +						    -1 /* any pid */,
> +						    "/proc/self/exe",
> +						    uprobe_offset);
> +	if (CHECK(IS_ERR(uretprobe_link), "attach_uretprobe",
> +		  "err %ld\n", PTR_ERR(uretprobe_link))) {
> +		uretprobe_link = NULL;
> +		goto cleanup;
> +	}
> +
> +	/* trigger & validate kprobe && kretprobe */
> +	usleep(1);
> +
> +	err = bpf_map_lookup_elem(results_map_fd, &kprobe_idx, &res);
> +	if (CHECK(err, "get_kprobe_res",
> +		  "failed to get kprobe res: %d\n", err))
> +		goto cleanup;
> +	if (CHECK(res != kprobe_idx + 1, "check_kprobe_res",
> +		  "wrong kprobe res: %d\n", res))
> +		goto cleanup;
> +
> +	err = bpf_map_lookup_elem(results_map_fd, &kretprobe_idx, &res);
> +	if (CHECK(err, "get_kretprobe_res",
> +		  "failed to get kretprobe res: %d\n", err))
> +		goto cleanup;
> +	if (CHECK(res != kretprobe_idx + 1, "check_kretprobe_res",
> +		  "wrong kretprobe res: %d\n", res))
> +		goto cleanup;
> +
> +	/* trigger & validate uprobe & uretprobe */
> +	get_base_addr();
> +
> +	err = bpf_map_lookup_elem(results_map_fd, &uprobe_idx, &res);
> +	if (CHECK(err, "get_uprobe_res",
> +		  "failed to get uprobe res: %d\n", err))
> +		goto cleanup;
> +	if (CHECK(res != uprobe_idx + 1, "check_uprobe_res",
> +		  "wrong uprobe res: %d\n", res))
> +		goto cleanup;
> +
> +	err = bpf_map_lookup_elem(results_map_fd, &uretprobe_idx, &res);
> +	if (CHECK(err, "get_uretprobe_res",
> +		  "failed to get uretprobe res: %d\n", err))
> +		goto cleanup;
> +	if (CHECK(res != uretprobe_idx + 1, "check_uretprobe_res",
> +		  "wrong uretprobe res: %d\n", res))
> +		goto cleanup;
> +
> +cleanup:
> +	bpf_link__destroy(kprobe_link);
> +	bpf_link__destroy(kretprobe_link);
> +	bpf_link__destroy(uprobe_link);
> +	bpf_link__destroy(uretprobe_link);
> +	bpf_object__close(obj);
> +}
> diff --git a/tools/testing/selftests/bpf/progs/test_attach_probe.c b/tools/testing/selftests/bpf/progs/test_attach_probe.c
> new file mode 100644
> index 000000000000..7a7c5cd728c8
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/test_attach_probe.c
> @@ -0,0 +1,55 @@
> +// SPDX-License-Identifier: GPL-2.0
> +// Copyright (c) 2017 Facebook
> +
> +#include <linux/ptrace.h>
> +#include <linux/bpf.h>
> +#include "bpf_helpers.h"
> +
> +struct {
> +	int type;
> +	int max_entries;
> +	int *key;
> +	int *value;
> +} results_map SEC(".maps") = {
> +	.type = BPF_MAP_TYPE_ARRAY,
> +	.max_entries = 4,
> +};

Now that the new .maps convention patch is merged, test_progs is broken
due to this. The above .maps definition needs to be updated to:

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 4);
	__type(key, int);
	__type(value, int);
} results_map SEC(".maps");
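
(For context: these helper macros, roughly as defined in the selftests'
bpf_helpers.h at the time of writing, encode integer attributes as
array dimensions and key/value types as pointer types, so the map
definition is described via BTF rather than a data section:)

	#define __uint(name, val) int (*name)[val]
	#define __type(name, val) typeof(val) *name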

> +
> +SEC("kprobe/sys_nanosleep")
> +int handle_sys_nanosleep_entry(struct pt_regs *ctx)
> +{
> +	const int key = 0, value = 1;
> +
> +	bpf_map_update_elem(&results_map, &key, &value, 0);
> +	return 0;
> +}
> +
> +SEC("kretprobe/sys_nanosleep")
> +int handle_sys_getpid_return(struct pt_regs *ctx)
> +{
> +	const int key = 1, value = 2;
> +
> +	bpf_map_update_elem(&results_map, &key, &value, 0);
> +	return 0;
> +}
> +
> +SEC("uprobe/trigger_func")
> +int handle_uprobe_entry(struct pt_regs *ctx)
> +{
> +	const int key = 2, value = 3;
> +
> +	bpf_map_update_elem(&results_map, &key, &value, 0);
> +	return 0;
> +}
> +
> +SEC("uretprobe/trigger_func")
> +int handle_uprobe_return(struct pt_regs *ctx)
> +{
> +	const int key = 3, value = 4;
> +
> +	bpf_map_update_elem(&results_map, &key, &value, 0);
> +	return 0;
> +}
> +
> +char _license[] SEC("license") = "GPL";
> +__u32 _version SEC("version") = 1;
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v5 bpf-next 8/9] selftests/bpf: add kprobe/uprobe selftests
  2019-07-06 17:21   ` Yonghong Song
@ 2019-07-06 17:42     ` Andrii Nakryiko
  2019-07-06 17:48       ` Yonghong Song
  0 siblings, 1 reply; 19+ messages in thread
From: Andrii Nakryiko @ 2019-07-06 17:42 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Andrii Nakryiko, bpf, netdev, Alexei Starovoitov, daniel, Kernel Team

On Sat, Jul 6, 2019 at 10:21 AM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 7/1/19 4:59 PM, Andrii Nakryiko wrote:
> > Add tests verifying kprobe/kretprobe/uprobe/uretprobe APIs work as
> > expected.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > Reviewed-by: Stanislav Fomichev <sdf@google.com>
> > Acked-by: Song Liu <songliubraving@fb.com>
> > ---

<snip>

> > diff --git a/tools/testing/selftests/bpf/progs/test_attach_probe.c b/tools/testing/selftests/bpf/progs/test_attach_probe.c
> > new file mode 100644
> > index 000000000000..7a7c5cd728c8
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/test_attach_probe.c
> > @@ -0,0 +1,55 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +// Copyright (c) 2017 Facebook
> > +
> > +#include <linux/ptrace.h>
> > +#include <linux/bpf.h>
> > +#include "bpf_helpers.h"
> > +
> > +struct {
> > +     int type;
> > +     int max_entries;
> > +     int *key;
> > +     int *value;
> > +} results_map SEC(".maps") = {
> > +     .type = BPF_MAP_TYPE_ARRAY,
> > +     .max_entries = 4,
> > +};
>
> Now that the new .maps convention patch is merged, test_progs is broken
> due to this. The above .maps definition needs to be updated to:
>
> struct {
>         __uint(type, BPF_MAP_TYPE_ARRAY);
>         __uint(max_entries, 4);
>         __type(key, int);
>         __type(value, int);
> } results_map SEC(".maps");
>

Yep, noticed that yesterday. Fixed in [0].

  [0] https://patchwork.ozlabs.org/patch/1128383/

<snip>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v5 bpf-next 8/9] selftests/bpf: add kprobe/uprobe selftests
  2019-07-06 17:42     ` Andrii Nakryiko
@ 2019-07-06 17:48       ` Yonghong Song
  0 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2019-07-06 17:48 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, netdev, Alexei Starovoitov, daniel, Kernel Team



On 7/6/19 10:42 AM, Andrii Nakryiko wrote:
> On Sat, Jul 6, 2019 at 10:21 AM Yonghong Song <yhs@fb.com> wrote:
>>
>>
>>
>> On 7/1/19 4:59 PM, Andrii Nakryiko wrote:
>>> Add tests verifying kprobe/kretprobe/uprobe/uretprobe APIs work as
>>> expected.
>>>
>>> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
>>> Reviewed-by: Stanislav Fomichev <sdf@google.com>
>>> Acked-by: Song Liu <songliubraving@fb.com>
>>> ---
>>>    .../selftests/bpf/prog_tests/attach_probe.c   | 166 ++++++++++++++++++
>>>    .../selftests/bpf/progs/test_attach_probe.c   |  55 ++++++
>>>    2 files changed, 221 insertions(+)
>>>    create mode 100644 tools/testing/selftests/bpf/prog_tests/attach_probe.c
>>>    create mode 100644 tools/testing/selftests/bpf/progs/test_attach_probe.c
>>>
>>> diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
>>> new file mode 100644
>>> index 000000000000..a4686395522c
>>> --- /dev/null
>>> +++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
>>> @@ -0,0 +1,166 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +#include <test_progs.h>
>>> +
>>> +ssize_t get_base_addr() {
>>> +     size_t start;
>>> +     char buf[256];
>>> +     FILE *f;
>>> +
>>> +     f = fopen("/proc/self/maps", "r");
>>> +     if (!f)
>>> +             return -errno;
>>> +
>>> +     while (fscanf(f, "%zx-%*x %s %*s\n", &start, buf) == 2) {
>>> +             if (strcmp(buf, "r-xp") == 0) {
>>> +                     fclose(f);
>>> +                     return start;
>>> +             }
>>> +     }
>>> +
>>> +     fclose(f);
>>> +     return -EINVAL;
>>> +}
>>> +
>>> +#ifdef __x86_64__
>>> +#define SYS_KPROBE_NAME "__x64_sys_nanosleep"
>>> +#else
>>> +#define SYS_KPROBE_NAME "sys_nanosleep"
>>> +#endif
>>> +
>>> +void test_attach_probe(void)
>>> +{
>>> +     const char *kprobe_name = "kprobe/sys_nanosleep";
>>> +     const char *kretprobe_name = "kretprobe/sys_nanosleep";
>>> +     const char *uprobe_name = "uprobe/trigger_func";
>>> +     const char *uretprobe_name = "uretprobe/trigger_func";
>>> +     const int kprobe_idx = 0, kretprobe_idx = 1;
>>> +     const int uprobe_idx = 2, uretprobe_idx = 3;
>>> +     const char *file = "./test_attach_probe.o";
>>> +     struct bpf_program *kprobe_prog, *kretprobe_prog;
>>> +     struct bpf_program *uprobe_prog, *uretprobe_prog;
>>> +     struct bpf_object *obj;
>>> +     int err, prog_fd, duration = 0, res;
>>> +     struct bpf_link *kprobe_link = NULL;
>>> +     struct bpf_link *kretprobe_link = NULL;
>>> +     struct bpf_link *uprobe_link = NULL;
>>> +     struct bpf_link *uretprobe_link = NULL;
>>> +     int results_map_fd;
>>> +     size_t uprobe_offset;
>>> +     ssize_t base_addr;
>>> +
>>> +     base_addr = get_base_addr();
>>> +     if (CHECK(base_addr < 0, "get_base_addr",
>>> +               "failed to find base addr: %zd", base_addr))
>>> +             return;
>>> +     uprobe_offset = (size_t)&get_base_addr - base_addr;
>>> +
>>> +     /* load programs */
>>> +     err = bpf_prog_load(file, BPF_PROG_TYPE_KPROBE, &obj, &prog_fd);
>>> +     if (CHECK(err, "obj_load", "err %d errno %d\n", err, errno))
>>> +             return;
>>> +
>>> +     kprobe_prog = bpf_object__find_program_by_title(obj, kprobe_name);
>>> +     if (CHECK(!kprobe_prog, "find_probe",
>>> +               "prog '%s' not found\n", kprobe_name))
>>> +             goto cleanup;
>>> +     kretprobe_prog = bpf_object__find_program_by_title(obj, kretprobe_name);
>>> +     if (CHECK(!kretprobe_prog, "find_probe",
>>> +               "prog '%s' not found\n", kretprobe_name))
>>> +             goto cleanup;
>>> +     uprobe_prog = bpf_object__find_program_by_title(obj, uprobe_name);
>>> +     if (CHECK(!uprobe_prog, "find_probe",
>>> +               "prog '%s' not found\n", uprobe_name))
>>> +             goto cleanup;
>>> +     uretprobe_prog = bpf_object__find_program_by_title(obj, uretprobe_name);
>>> +     if (CHECK(!uretprobe_prog, "find_probe",
>>> +               "prog '%s' not found\n", uretprobe_name))
>>> +             goto cleanup;
>>> +
>>> +     /* load maps */
>>> +     results_map_fd = bpf_find_map(__func__, obj, "results_map");
>>> +     if (CHECK(results_map_fd < 0, "find_results_map",
>>> +               "err %d\n", results_map_fd))
>>> +             goto cleanup;
>>> +
>>> +     kprobe_link = bpf_program__attach_kprobe(kprobe_prog,
>>> +                                              false /* retprobe */,
>>> +                                              SYS_KPROBE_NAME);
>>> +     if (CHECK(IS_ERR(kprobe_link), "attach_kprobe",
>>> +               "err %ld\n", PTR_ERR(kprobe_link))) {
>>> +             kprobe_link = NULL;
>>> +             goto cleanup;
>>> +     }
>>> +     kretprobe_link = bpf_program__attach_kprobe(kretprobe_prog,
>>> +                                                 true /* retprobe */,
>>> +                                                 SYS_KPROBE_NAME);
>>> +     if (CHECK(IS_ERR(kretprobe_link), "attach_kretprobe",
>>> +               "err %ld\n", PTR_ERR(kretprobe_link))) {
>>> +             kretprobe_link = NULL;
>>> +             goto cleanup;
>>> +     }
>>> +     uprobe_link = bpf_program__attach_uprobe(uprobe_prog,
>>> +                                              false /* retprobe */,
>>> +                                              0 /* self pid */,
>>> +                                              "/proc/self/exe",
>>> +                                              uprobe_offset);
>>> +     if (CHECK(IS_ERR(uprobe_link), "attach_uprobe",
>>> +               "err %ld\n", PTR_ERR(uprobe_link))) {
>>> +             uprobe_link = NULL;
>>> +             goto cleanup;
>>> +     }
>>> +     uretprobe_link = bpf_program__attach_uprobe(uretprobe_prog,
>>> +                                                 true /* retprobe */,
>>> +                                                 -1 /* any pid */,
>>> +                                                 "/proc/self/exe",
>>> +                                                 uprobe_offset);
>>> +     if (CHECK(IS_ERR(uretprobe_link), "attach_uretprobe",
>>> +               "err %ld\n", PTR_ERR(uretprobe_link))) {
>>> +             uretprobe_link = NULL;
>>> +             goto cleanup;
>>> +     }
>>> +
>>> +     /* trigger & validate kprobe && kretprobe */
>>> +     usleep(1);
>>> +
>>> +     err = bpf_map_lookup_elem(results_map_fd, &kprobe_idx, &res);
>>> +     if (CHECK(err, "get_kprobe_res",
>>> +               "failed to get kprobe res: %d\n", err))
>>> +             goto cleanup;
>>> +     if (CHECK(res != kprobe_idx + 1, "check_kprobe_res",
>>> +               "wrong kprobe res: %d\n", res))
>>> +             goto cleanup;
>>> +
>>> +     err = bpf_map_lookup_elem(results_map_fd, &kretprobe_idx, &res);
>>> +     if (CHECK(err, "get_kretprobe_res",
>>> +               "failed to get kretprobe res: %d\n", err))
>>> +             goto cleanup;
>>> +     if (CHECK(res != kretprobe_idx + 1, "check_kretprobe_res",
>>> +               "wrong kretprobe res: %d\n", res))
>>> +             goto cleanup;
>>> +
>>> +     /* trigger & validate uprobe & uretprobe */
>>> +     get_base_addr();
>>> +
>>> +     err = bpf_map_lookup_elem(results_map_fd, &uprobe_idx, &res);
>>> +     if (CHECK(err, "get_uprobe_res",
>>> +               "failed to get uprobe res: %d\n", err))
>>> +             goto cleanup;
>>> +     if (CHECK(res != uprobe_idx + 1, "check_uprobe_res",
>>> +               "wrong uprobe res: %d\n", res))
>>> +             goto cleanup;
>>> +
>>> +     err = bpf_map_lookup_elem(results_map_fd, &uretprobe_idx, &res);
>>> +     if (CHECK(err, "get_uretprobe_res",
>>> +               "failed to get uretprobe res: %d\n", err))
>>> +             goto cleanup;
>>> +     if (CHECK(res != uretprobe_idx + 1, "check_uretprobe_res",
>>> +               "wrong uretprobe res: %d\n", res))
>>> +             goto cleanup;
>>> +
>>> +cleanup:
>>> +     bpf_link__destroy(kprobe_link);
>>> +     bpf_link__destroy(kretprobe_link);
>>> +     bpf_link__destroy(uprobe_link);
>>> +     bpf_link__destroy(uretprobe_link);
>>> +     bpf_object__close(obj);
>>> +}
>>> diff --git a/tools/testing/selftests/bpf/progs/test_attach_probe.c b/tools/testing/selftests/bpf/progs/test_attach_probe.c
>>> new file mode 100644
>>> index 000000000000..7a7c5cd728c8
>>> --- /dev/null
>>> +++ b/tools/testing/selftests/bpf/progs/test_attach_probe.c
>>> @@ -0,0 +1,55 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +// Copyright (c) 2017 Facebook
>>> +
>>> +#include <linux/ptrace.h>
>>> +#include <linux/bpf.h>
>>> +#include "bpf_helpers.h"
>>> +
>>> +struct {
>>> +     int type;
>>> +     int max_entries;
>>> +     int *key;
>>> +     int *value;
>>> +} results_map SEC(".maps") = {
>>> +     .type = BPF_MAP_TYPE_ARRAY,
>>> +     .max_entries = 4,
>>> +};
>>
>> After the new .maps convention patch is merged, test_progs is broken due
>> to this. The above .maps definition needs to be updated to
>>
>> struct {
>>          __uint(type, BPF_MAP_TYPE_ARRAY);
>>          __uint(max_entries, 4);
>>          __type(key, int);
>>          __type(value, int);
>> } results_map SEC(".maps");
>>
> 
> Yep, noticed that yesterday. Fixed in [0].
> 
>    [0] https://patchwork.ozlabs.org/patch/1128383/
> 

Oh, sorry, I forgot about this patch. Indeed, it fixed the issue.
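
For context on why this form works: __uint() and __type() (from
bpf_helpers.h) encode each map attribute into the struct's BTF, which
libbpf parses at load time instead of relying on a fixed struct layout.
Roughly, as a sketch (the exact macro definitions may differ between
versions):

#define __uint(name, val)  int (*name)[val]   /* value encoded in array size */
#define __type(name, val)  typeof(val) *name  /* type encoded in pointee type */

/* so the fixed definition above is structurally equivalent to: */
struct {
        int (*type)[BPF_MAP_TYPE_ARRAY];
        int (*max_entries)[4];
        int *key;
        int *value;
} results_map SEC(".maps");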

>>> +
>>> +SEC("kprobe/sys_nanosleep")
>>> +int handle_sys_nanosleep_entry(struct pt_regs *ctx)
>>> +{
>>> +     const int key = 0, value = 1;
>>> +
>>> +     bpf_map_update_elem(&results_map, &key, &value, 0);
>>> +     return 0;
>>> +}
>>> +
>>> +SEC("kretprobe/sys_nanosleep")
>>> +int handle_sys_nanosleep_return(struct pt_regs *ctx)
>>> +{
>>> +     const int key = 1, value = 2;
>>> +
>>> +     bpf_map_update_elem(&results_map, &key, &value, 0);
>>> +     return 0;
>>> +}
>>> +
>>> +SEC("uprobe/trigger_func")
>>> +int handle_uprobe_entry(struct pt_regs *ctx)
>>> +{
>>> +     const int key = 2, value = 3;
>>> +
>>> +     bpf_map_update_elem(&results_map, &key, &value, 0);
>>> +     return 0;
>>> +}
>>> +
>>> +SEC("uretprobe/trigger_func")
>>> +int handle_uprobe_return(struct pt_regs *ctx)
>>> +{
>>> +     const int key = 3, value = 4;
>>> +
>>> +     bpf_map_update_elem(&results_map, &key, &value, 0);
>>> +     return 0;
>>> +}
>>> +
>>> +char _license[] SEC("license") = "GPL";
>>> +__u32 _version SEC("version") = 1;
>>>
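
One detail worth spelling out from the test above: unlike kprobes,
bpf_program__attach_uprobe() identifies its target by binary path plus
file offset, so the test derives the offset of get_base_addr() by
subtracting the executable's load address (the r-xp line in
/proc/self/maps) from the function's runtime address; calling
get_base_addr() again afterwards is what triggers the uprobe. A
condensed sketch of the derivation, reusing the test's helper:

/* Sketch: derive a uprobe file offset for a local function, as the
 * quoted test does. get_base_addr() returns the runtime base of the
 * executable's r-xp mapping (parsed from /proc/self/maps).
 */
ssize_t base_addr = get_base_addr();
size_t uprobe_offset;

if (base_addr < 0)
        return base_addr;

/* runtime address - load base == file offset inside /proc/self/exe */
uprobe_offset = (size_t)&get_base_addr - (size_t)base_addr;

link = bpf_program__attach_uprobe(uprobe_prog, false /* retprobe */,
                                  0 /* pid 0 == own process */,
                                  "/proc/self/exe", uprobe_offset);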


end of thread

Thread overview: 19+ messages
2019-07-01 23:58 [PATCH v5 bpf-next 0/9] libbpf: add bpf_link and tracing attach APIs Andrii Nakryiko
2019-07-01 23:58 ` [PATCH v5 bpf-next 1/9] libbpf: make libbpf_strerror_r agnostic to sign of error Andrii Nakryiko
2019-07-01 23:58 ` [PATCH v5 bpf-next 2/9] libbpf: introduce concept of bpf_link Andrii Nakryiko
2019-07-01 23:58 ` [PATCH v5 bpf-next 3/9] libbpf: add ability to attach/detach BPF program to perf event Andrii Nakryiko
2019-07-01 23:58 ` [PATCH v5 bpf-next 4/9] libbpf: add kprobe/uprobe attach API Andrii Nakryiko
2019-07-03 12:39   ` Daniel Borkmann
2019-07-03 16:47     ` Andrii Nakryiko
2019-07-04  0:57       ` Andrii Nakryiko
2019-07-05 20:46         ` Daniel Borkmann
2019-07-06  3:48           ` Andrii Nakryiko
2019-07-01 23:58 ` [PATCH v5 bpf-next 5/9] libbpf: add tracepoint " Andrii Nakryiko
2019-07-01 23:59 ` [PATCH v5 bpf-next 6/9] libbpf: add raw " Andrii Nakryiko
2019-07-01 23:59 ` [PATCH v5 bpf-next 7/9] selftests/bpf: switch test to new attach_perf_event API Andrii Nakryiko
2019-07-01 23:59 ` [PATCH v5 bpf-next 8/9] selftests/bpf: add kprobe/uprobe selftests Andrii Nakryiko
2019-07-06 17:21   ` Yonghong Song
2019-07-06 17:42     ` Andrii Nakryiko
2019-07-06 17:48       ` Yonghong Song
2019-07-01 23:59 ` [PATCH v5 bpf-next 9/9] selftests/bpf: convert existing tracepoint tests to new APIs Andrii Nakryiko
2019-07-02  0:36 ` [PATCH v5 bpf-next 0/9] libbpf: add bpf_link and tracing attach APIs Yonghong Song
