* [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query
@ 2018-05-15 23:45 Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 1/7] perf/core: add perf_get_event() to return perf_event given a struct file Yonghong Song
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-15 23:45 UTC (permalink / raw)
  To: peterz, ast, daniel, netdev; +Cc: kernel-team

Suppose a userspace application has loaded a bpf program
and attached it to a tracepoint/kprobe/uprobe, and a bpf
introspection tool, e.g., bpftool, wants to show which bpf program
is attached to which tracepoint/kprobe/uprobe. Such attachment
information is really useful for understanding the overall bpf
deployment in the system.

Each program has a name field (16 bytes), which could be used
to encode the attachment point, but this approach has some
drawbacks. First, a bpftool user (e.g., an admin) may not
really understand the association between the name and the
attachment point. Second, if one program is attached to multiple
places, encoding a proper name which implies all these
attachments becomes difficult.

This patch set introduces a new bpf subcommand BPF_PERF_EVENT_QUERY.
Given a pid and fd, if the <pid, fd> is associated with a
tracepoint/kprobe/uprobe perf event, BPF_PERF_EVENT_QUERY will return
   . prog_id
   . tracepoint name, or
   . k[ret]probe funcname + offset or kernel addr, or
   . u[ret]probe filename + offset
to userspace.
The user can then use "bpftool prog" to find more information about
the bpf program itself via the prog_id.

Patch #1 adds function perf_get_event() in kernel/events/core.c.
Patch #2 implements the bpf subcommand BPF_PERF_EVENT_QUERY.
Patch #3 syncs tools bpf.h header and also adds bpf_trace_event_query()
in the libbpf library for samples/selftests/bpftool to use.
Patch #4 adds ksym_get_addr() utility function.
Patch #5 adds a test in samples/bpf for querying k[ret]probes and
u[ret]probes.
Patch #6 adds a test in tools/testing/selftests/bpf for querying
raw_tracepoint and tracepoint.
Patch #7 adds a new subcommand "perf" to bpftool.

Yonghong Song (7):
  perf/core: add perf_get_event() to return perf_event given a struct
    file
  bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY
  tools/bpf: sync kernel header bpf.h and add bpf_trace_event_query in
    libbpf
  tools/bpf: add ksym_get_addr() in trace_helpers
  samples/bpf: add a samples/bpf test for BPF_PERF_EVENT_QUERY
  tools/bpf: add two BPF_PERF_EVENT_QUERY tests in test_progs
  tools/bpftool: add perf subcommand

 include/linux/perf_event.h                  |   5 +
 include/linux/trace_events.h                |  15 ++
 include/uapi/linux/bpf.h                    |  25 ++
 kernel/bpf/syscall.c                        | 113 +++++++++
 kernel/events/core.c                        |   8 +
 kernel/trace/bpf_trace.c                    |  53 ++++
 kernel/trace/trace_kprobe.c                 |  29 +++
 kernel/trace/trace_uprobe.c                 |  22 ++
 samples/bpf/Makefile                        |   4 +
 samples/bpf/perf_event_query_kern.c         |  19 ++
 samples/bpf/perf_event_query_user.c         | 376 ++++++++++++++++++++++++++++
 tools/bpf/bpftool/main.c                    |   3 +-
 tools/bpf/bpftool/main.h                    |   1 +
 tools/bpf/bpftool/perf.c                    | 188 ++++++++++++++
 tools/include/uapi/linux/bpf.h              |  25 ++
 tools/lib/bpf/bpf.c                         |  23 ++
 tools/lib/bpf/bpf.h                         |   3 +
 tools/testing/selftests/bpf/test_progs.c    | 133 ++++++++++
 tools/testing/selftests/bpf/trace_helpers.c |  12 +
 tools/testing/selftests/bpf/trace_helpers.h |   1 +
 20 files changed, 1057 insertions(+), 1 deletion(-)
 create mode 100644 samples/bpf/perf_event_query_kern.c
 create mode 100644 samples/bpf/perf_event_query_user.c
 create mode 100644 tools/bpf/bpftool/perf.c

-- 
2.9.5

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH bpf-next 1/7] perf/core: add perf_get_event() to return perf_event given a struct file
  2018-05-15 23:45 [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query Yonghong Song
@ 2018-05-15 23:45 ` Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY Yonghong Song
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-15 23:45 UTC (permalink / raw)
  To: peterz, ast, daniel, netdev; +Cc: kernel-team

A new extern function, perf_get_event(), is added to return a perf event
given a struct file. This function will be used in later patches.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 include/linux/perf_event.h | 5 +++++
 kernel/events/core.c       | 8 ++++++++
 2 files changed, 13 insertions(+)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index e71e99e..b5c1ad3 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -868,6 +868,7 @@ extern void perf_event_exit_task(struct task_struct *child);
 extern void perf_event_free_task(struct task_struct *task);
 extern void perf_event_delayed_put(struct task_struct *task);
 extern struct file *perf_event_get(unsigned int fd);
+extern struct perf_event *perf_get_event(struct file *file);
 extern const struct perf_event_attr *perf_event_attrs(struct perf_event *event);
 extern void perf_event_print_debug(void);
 extern void perf_pmu_disable(struct pmu *pmu);
@@ -1289,6 +1290,10 @@ static inline void perf_event_exit_task(struct task_struct *child)	{ }
 static inline void perf_event_free_task(struct task_struct *task)	{ }
 static inline void perf_event_delayed_put(struct task_struct *task)	{ }
 static inline struct file *perf_event_get(unsigned int fd)	{ return ERR_PTR(-EINVAL); }
+static inline struct perf_event *perf_get_event(struct file *file)
+{
+	return ERR_PTR(-EINVAL);
+}
 static inline const struct perf_event_attr *perf_event_attrs(struct perf_event *event)
 {
 	return ERR_PTR(-EINVAL);
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 67612ce..1e3cddb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -11212,6 +11212,14 @@ struct file *perf_event_get(unsigned int fd)
 	return file;
 }
 
+struct perf_event *perf_get_event(struct file *file)
+{
+	if (file->f_op != &perf_fops)
+		return ERR_PTR(-EINVAL);
+
+	return file->private_data;
+}
+
 const struct perf_event_attr *perf_event_attrs(struct perf_event *event)
 {
 	if (!event)
-- 
2.9.5


* [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY
  2018-05-15 23:45 [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 1/7] perf/core: add perf_get_event() to return perf_event given a struct file Yonghong Song
@ 2018-05-15 23:45 ` Yonghong Song
  2018-05-16 11:27   ` Peter Zijlstra
  2018-05-17 23:52   ` kbuild test robot
  2018-05-15 23:45 ` [PATCH bpf-next 3/7] tools/bpf: sync kernel header bpf.h and add bpf_trace_event_query in libbpf Yonghong Song
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-15 23:45 UTC (permalink / raw)
  To: peterz, ast, daniel, netdev; +Cc: kernel-team

Suppose a userspace application has loaded a bpf program
and attached it to a tracepoint/kprobe/uprobe, and a bpf
introspection tool, e.g., bpftool, wants to show which bpf program
is attached to which tracepoint/kprobe/uprobe. Such attachment
information is really useful for understanding the overall bpf
deployment in the system.

Each program has a name field (16 bytes), which could be used
to encode the attachment point, but this approach has some
drawbacks. First, a bpftool user (e.g., an admin) may not
really understand the association between the name and the
attachment point. Second, if one program is attached to multiple
places, encoding a proper name which implies all these
attachments becomes difficult.

This patch introduces a new bpf subcommand BPF_PERF_EVENT_QUERY.
Given a pid and fd, if the <pid, fd> is associated with a
tracepoint/kprobe/uprobe perf event, BPF_PERF_EVENT_QUERY will return
   . prog_id
   . tracepoint name, or
   . k[ret]probe funcname + offset or kernel addr, or
   . u[ret]probe filename + offset
to userspace.
The user can then use "bpftool prog" to find more information about
the bpf program itself via the prog_id.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 include/linux/trace_events.h |  15 ++++++
 include/uapi/linux/bpf.h     |  25 ++++++++++
 kernel/bpf/syscall.c         | 113 +++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c     |  53 ++++++++++++++++++++
 kernel/trace/trace_kprobe.c  |  29 +++++++++++
 kernel/trace/trace_uprobe.c  |  22 +++++++++
 6 files changed, 257 insertions(+)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 2bde3ef..ec1f604 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -473,6 +473,9 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info);
 int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog);
 int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *prog);
 struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name);
+int bpf_get_perf_event_info(struct file *file, u32 *prog_id, u32 *prog_info,
+			    const char **buf, u64 *probe_offset,
+			    u64 *probe_addr);
 #else
 static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 {
@@ -504,6 +507,12 @@ static inline struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name
 {
 	return NULL;
 }
+static inline int bpf_get_perf_event_info(struct file *file, u32 *prog_id,
+					  u32 *prog_info, const char **buf,
+					  u64 *probe_offset, u64 *probe_addr)
+{
+	return -EOPNOTSUPP;
+}
 #endif
 
 enum {
@@ -560,10 +569,16 @@ extern void perf_trace_del(struct perf_event *event, int flags);
 #ifdef CONFIG_KPROBE_EVENTS
 extern int  perf_kprobe_init(struct perf_event *event, bool is_retprobe);
 extern void perf_kprobe_destroy(struct perf_event *event);
+extern int bpf_get_kprobe_info(struct perf_event *event, u32 *prog_info,
+			       const char **symbol, u64 *probe_offset,
+			       u64 *probe_addr, bool perf_type_tracepoint);
 #endif
 #ifdef CONFIG_UPROBE_EVENTS
 extern int  perf_uprobe_init(struct perf_event *event, bool is_retprobe);
 extern void perf_uprobe_destroy(struct perf_event *event);
+extern int bpf_get_uprobe_info(struct perf_event *event, u32 *prog_info,
+			       const char **filename, u64 *probe_offset,
+			       bool perf_type_tracepoint);
 #endif
 extern int  ftrace_profile_set_filter(struct perf_event *event, int event_id,
 				     char *filter_str);
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index d94d333..b78eca1 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -97,6 +97,7 @@ enum bpf_cmd {
 	BPF_RAW_TRACEPOINT_OPEN,
 	BPF_BTF_LOAD,
 	BPF_BTF_GET_FD_BY_ID,
+	BPF_PERF_EVENT_QUERY,
 };
 
 enum bpf_map_type {
@@ -379,6 +380,22 @@ union bpf_attr {
 		__u32		btf_log_size;
 		__u32		btf_log_level;
 	};
+
+	struct {
+		int		pid;		/* input: pid */
+		int		fd;		/* input: fd */
+		__u32		flags;		/* input: flags */
+		__u32		buf_len;	/* input: buf len */
+		__aligned_u64	buf;		/* input/output:
+						 *   tp_name for tracepoint
+						 *   symbol for kprobe
+						 *   filename for uprobe
+						 */
+	__u32		prog_id;	/* output: prog_id */
+		__u32		prog_info;	/* output: BPF_PERF_INFO_* */
+		__u64		probe_offset;	/* output: probe_offset */
+		__u64		probe_addr;	/* output: probe_addr */
+	} perf_event_query;
 } __attribute__((aligned(8)));
 
 /* The description below is an attempt at providing documentation to eBPF
@@ -2450,4 +2467,12 @@ struct bpf_fib_lookup {
 	__u8	dmac[6];     /* ETH_ALEN */
 };
 
+enum {
+	BPF_PERF_INFO_TP_NAME,		/* tp name */
+	BPF_PERF_INFO_KPROBE,		/* (symbol + offset) or addr */
+	BPF_PERF_INFO_KRETPROBE,	/* (symbol + offset) or addr */
+	BPF_PERF_INFO_UPROBE,		/* filename + offset */
+	BPF_PERF_INFO_URETPROBE,	/* filename + offset */
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e2aeb5e..347e4d2 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -18,7 +18,9 @@
 #include <linux/vmalloc.h>
 #include <linux/mmzone.h>
 #include <linux/anon_inodes.h>
+#include <linux/fdtable.h>
 #include <linux/file.h>
+#include <linux/fs.h>
 #include <linux/license.h>
 #include <linux/filter.h>
 #include <linux/version.h>
@@ -2093,6 +2095,114 @@ static int bpf_btf_get_fd_by_id(const union bpf_attr *attr)
 	return btf_get_fd_by_id(attr->btf_id);
 }
 
+static int bpf_perf_event_info_copy(const union bpf_attr *attr,
+				    union bpf_attr __user *uattr,
+				    u32 prog_id, u32 prog_info,
+				    const char *buf, u64 probe_offset,
+				    u64 probe_addr)
+{
+	__u64 __user *ubuf;
+	int len;
+
+	ubuf = u64_to_user_ptr(attr->perf_event_query.buf);
+	if (buf) {
+		len = strlen(buf);
+		if (attr->perf_event_query.buf_len < len + 1)
+			return -ENOSPC;
+		if (copy_to_user(ubuf, buf, len + 1))
+			return -EFAULT;
+	} else if (attr->perf_event_query.buf_len) {
+		/* copy '\0' to ubuf */
+		__u8 zero = 0;
+
+		if (copy_to_user(ubuf, &zero, 1))
+			return -EFAULT;
+	}
+
+	if (copy_to_user(&uattr->perf_event_query.prog_id, &prog_id,
+			 sizeof(prog_id)) ||
+	    copy_to_user(&uattr->perf_event_query.prog_info, &prog_info,
+			 sizeof(prog_info)) ||
+	    copy_to_user(&uattr->perf_event_query.probe_offset, &probe_offset,
+			 sizeof(probe_offset)) ||
+	    copy_to_user(&uattr->perf_event_query.probe_addr, &probe_addr,
+			 sizeof(probe_addr)))
+		return -EFAULT;
+
+	return 0;
+}
+
+#define BPF_PERF_EVENT_QUERY_LAST_FIELD perf_event_query.probe_addr
+
+static int bpf_perf_event_query(const union bpf_attr *attr,
+				union bpf_attr __user *uattr)
+{
+	pid_t pid = attr->perf_event_query.pid;
+	int fd = attr->perf_event_query.fd;
+	struct files_struct *files;
+	struct task_struct *task;
+	struct file *file;
+	int err;
+
+	if (CHECK_ATTR(BPF_PERF_EVENT_QUERY))
+		return -EINVAL;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	task = get_pid_task(find_vpid(pid), PIDTYPE_PID);
+	if (!task)
+		return -ENOENT;
+
+	files = get_files_struct(task);
+	put_task_struct(task);
+	if (!files)
+		return -ENOENT;
+
+	err = 0;
+	spin_lock(&files->file_lock);
+	file = fcheck_files(files, fd);
+	if (!file)
+		err = -ENOENT;
+	else
+		get_file(file);
+	spin_unlock(&files->file_lock);
+	put_files_struct(files);
+
+	if (err)
+		goto out;
+
+	if (file->f_op == &bpf_raw_tp_fops) {
+		struct bpf_raw_tracepoint *raw_tp = file->private_data;
+		struct bpf_raw_event_map *btp = raw_tp->btp;
+
+		if (!raw_tp->prog)
+			err = -ENOENT;
+		else
+			err = bpf_perf_event_info_copy(attr, uattr,
+						       raw_tp->prog->aux->id,
+						       BPF_PERF_INFO_TP_NAME,
+						       btp->tp->name, 0, 0);
+	} else {
+		u64 probe_offset, probe_addr;
+		u32 prog_id, prog_info;
+		const char *buf;
+
+		err = bpf_get_perf_event_info(file, &prog_id, &prog_info,
+					      &buf, &probe_offset,
+					      &probe_addr);
+		if (!err)
+			err = bpf_perf_event_info_copy(attr, uattr, prog_id,
+						       prog_info, buf,
+						       probe_offset,
+						       probe_addr);
+	}
+
+	fput(file);
+out:
+	return err;
+}
+
 SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
 {
 	union bpf_attr attr = {};
@@ -2179,6 +2289,9 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
 	case BPF_BTF_GET_FD_BY_ID:
 		err = bpf_btf_get_fd_by_id(&attr);
 		break;
+	case BPF_PERF_EVENT_QUERY:
+		err = bpf_perf_event_query(&attr, uattr);
+		break;
 	default:
 		err = -EINVAL;
 		break;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index ce2cbbf..7e8121e 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -14,6 +14,7 @@
 #include <linux/uaccess.h>
 #include <linux/ctype.h>
 #include <linux/kprobes.h>
+#include <linux/syscalls.h>
 #include <linux/error-injection.h>
 
 #include "trace_probe.h"
@@ -1163,3 +1164,55 @@ int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
 	mutex_unlock(&bpf_event_mutex);
 	return err;
 }
+
+int bpf_get_perf_event_info(struct file *file, u32 *prog_id, u32 *prog_info,
+			    const char **buf, u64 *probe_offset,
+			    u64 *probe_addr)
+{
+	bool is_tracepoint, is_syscall_tp;
+	struct perf_event *event;
+	struct bpf_prog *prog;
+	int flags, err = 0;
+
+	event = perf_get_event(file);
+	if (IS_ERR(event))
+		return PTR_ERR(event);
+
+	prog = event->prog;
+	if (!prog)
+		return -ENOENT;
+
+	/* not supporting BPF_PROG_TYPE_PERF_EVENT yet */
+	if (prog->type == BPF_PROG_TYPE_PERF_EVENT)
+		return -EOPNOTSUPP;
+
+	*prog_id = prog->aux->id;
+	flags = event->tp_event->flags;
+	is_tracepoint = flags & TRACE_EVENT_FL_TRACEPOINT;
+	is_syscall_tp = is_syscall_trace_event(event->tp_event);
+
+	if (is_tracepoint || is_syscall_tp) {
+		*buf = is_tracepoint ? event->tp_event->tp->name
+				     : event->tp_event->name;
+		*prog_info = BPF_PERF_INFO_TP_NAME;
+		*probe_offset = 0x0;
+		*probe_addr = 0x0;
+	} else {
+		/* kprobe/uprobe */
+		err = -EOPNOTSUPP;
+#ifdef CONFIG_KPROBE_EVENTS
+		if (flags & TRACE_EVENT_FL_KPROBE)
+			err = bpf_get_kprobe_info(event, prog_info, buf,
+						  probe_offset, probe_addr,
+						  event->attr.type == PERF_TYPE_TRACEPOINT);
+#endif
+#ifdef CONFIG_UPROBE_EVENTS
+		if (flags & TRACE_EVENT_FL_UPROBE)
+			err = bpf_get_uprobe_info(event, prog_info, buf,
+						  probe_offset,
+						  event->attr.type == PERF_TYPE_TRACEPOINT);
+#endif
+	}
+
+	return err;
+}
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 02aed76..595d154 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1287,6 +1287,35 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
 			      head, NULL);
 }
 NOKPROBE_SYMBOL(kretprobe_perf_func);
+
+int bpf_get_kprobe_info(struct perf_event *event, u32 *prog_info,
+			const char **symbol, u64 *probe_offset,
+			u64 *probe_addr, bool perf_type_tracepoint)
+{
+	const char *pevent = trace_event_name(event->tp_event);
+	const char *group = event->tp_event->class->system;
+	struct trace_kprobe *tk;
+
+	if (perf_type_tracepoint)
+		tk = find_trace_kprobe(pevent, group);
+	else
+		tk = event->tp_event->data;
+	if (!tk)
+		return -EINVAL;
+
+	*prog_info = trace_kprobe_is_return(tk) ? BPF_PERF_INFO_KRETPROBE
+						: BPF_PERF_INFO_KPROBE;
+	if (tk->symbol) {
+		*symbol = tk->symbol;
+		*probe_offset = tk->rp.kp.offset;
+		*probe_addr = 0;
+	} else {
+		*symbol = NULL;
+		*probe_offset = 0;
+		*probe_addr = (u64)tk->rp.kp.addr;
+	}
+	return 0;
+}
 #endif	/* CONFIG_PERF_EVENTS */
 
 /*
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index ac89287..e781a9f 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1161,6 +1161,28 @@ static void uretprobe_perf_func(struct trace_uprobe *tu, unsigned long func,
 {
 	__uprobe_perf_func(tu, func, regs, ucb, dsize);
 }
+
+int bpf_get_uprobe_info(struct perf_event *event, u32 *prog_info,
+			const char **filename, u64 *probe_offset,
+			bool perf_type_tracepoint)
+{
+	const char *pevent = trace_event_name(event->tp_event);
+	const char *group = event->tp_event->class->system;
+	struct trace_uprobe *tu;
+
+	if (perf_type_tracepoint)
+		tu = find_probe_event(pevent, group);
+	else
+		tu = event->tp_event->data;
+	if (!tu)
+		return -EINVAL;
+
+	*prog_info = is_ret_probe(tu) ? BPF_PERF_INFO_URETPROBE
+				      : BPF_PERF_INFO_UPROBE;
+	*filename = tu->filename;
+	*probe_offset = tu->offset;
+	return 0;
+}
 #endif	/* CONFIG_PERF_EVENTS */
 
 static int
-- 
2.9.5


* [PATCH bpf-next 3/7] tools/bpf: sync kernel header bpf.h and add bpf_trace_event_query in libbpf
  2018-05-15 23:45 [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 1/7] perf/core: add perf_get_event() to return perf_event given a struct file Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY Yonghong Song
@ 2018-05-15 23:45 ` Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 4/7] tools/bpf: add ksym_get_addr() in trace_helpers Yonghong Song
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-15 23:45 UTC (permalink / raw)
  To: peterz, ast, daniel, netdev; +Cc: kernel-team

Sync kernel header bpf.h to tools/include/uapi/linux/bpf.h and
implement bpf_trace_event_query() in libbpf. The test programs
in samples/bpf and tools/testing/selftests/bpf, and later bpftool,
will use this libbpf function to query the kernel.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/include/uapi/linux/bpf.h | 25 +++++++++++++++++++++++++
 tools/lib/bpf/bpf.c            | 23 +++++++++++++++++++++++
 tools/lib/bpf/bpf.h            |  3 +++
 3 files changed, 51 insertions(+)

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 1205d86..a209f01 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -97,6 +97,7 @@ enum bpf_cmd {
 	BPF_RAW_TRACEPOINT_OPEN,
 	BPF_BTF_LOAD,
 	BPF_BTF_GET_FD_BY_ID,
+	BPF_PERF_EVENT_QUERY,
 };
 
 enum bpf_map_type {
@@ -379,6 +380,22 @@ union bpf_attr {
 		__u32		btf_log_size;
 		__u32		btf_log_level;
 	};
+
+	struct {
+		int		pid;		/* input: pid */
+		int		fd;		/* input: fd */
+		__u32		flags;		/* input: flags */
+		__u32		buf_len;	/* input: buf len */
+		__aligned_u64	buf;		/* input/output:
+						 *   tp_name for tracepoint
+						 *   symbol for kprobe
+						 *   filename for uprobe
+						 */
+	__u32		prog_id;	/* output: prog_id */
+		__u32		prog_info;	/* output: BPF_PERF_INFO_* */
+		__u64		probe_offset;	/* output: probe_offset */
+		__u64		probe_addr;	/* output: probe_addr */
+	} perf_event_query;
 } __attribute__((aligned(8)));
 
 /* The description below is an attempt at providing documentation to eBPF
@@ -2450,4 +2467,12 @@ struct bpf_fib_lookup {
 	__u8	dmac[6];     /* ETH_ALEN */
 };
 
+enum {
+	BPF_PERF_INFO_TP_NAME,		/* tp name */
+	BPF_PERF_INFO_KPROBE,		/* (symbol + offset) or addr */
+	BPF_PERF_INFO_KRETPROBE,	/* (symbol + offset) or addr */
+	BPF_PERF_INFO_UPROBE,		/* filename + offset */
+	BPF_PERF_INFO_URETPROBE,	/* filename + offset */
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index a3a8fb2..e0152aa 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -641,3 +641,26 @@ int bpf_load_btf(void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size,
 
 	return fd;
 }
+
+int bpf_trace_event_query(int pid, int fd, char *buf, __u32 buf_len,
+			  __u32 *prog_id, __u32 *prog_info,
+			  __u64 *probe_offset, __u64 *probe_addr)
+{
+	union bpf_attr attr = {};
+	int err;
+
+	attr.perf_event_query.pid = pid;
+	attr.perf_event_query.fd = fd;
+	attr.perf_event_query.buf = ptr_to_u64(buf);
+	attr.perf_event_query.buf_len = buf_len;
+
+	err = sys_bpf(BPF_PERF_EVENT_QUERY, &attr, sizeof(attr));
+	if (!err) {
+		*prog_id = attr.perf_event_query.prog_id;
+		*prog_info = attr.perf_event_query.prog_info;
+		*probe_offset = attr.perf_event_query.probe_offset;
+		*probe_addr = attr.perf_event_query.probe_addr;
+	}
+
+	return err;
+}
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index fb3a146..53d05fc 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -105,4 +105,7 @@ int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 query_flags,
 int bpf_raw_tracepoint_open(const char *name, int prog_fd);
 int bpf_load_btf(void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size,
 		 bool do_log);
+int bpf_trace_event_query(int pid, int fd, char *buf, __u32 buf_len,
+			  __u32 *prog_id, __u32 *prog_info,
+			  __u64 *probe_offset, __u64 *probe_addr);
 #endif
-- 
2.9.5


* [PATCH bpf-next 4/7] tools/bpf: add ksym_get_addr() in trace_helpers
  2018-05-15 23:45 [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query Yonghong Song
                   ` (2 preceding siblings ...)
  2018-05-15 23:45 ` [PATCH bpf-next 3/7] tools/bpf: sync kernel header bpf.h and add bpf_trace_event_query in libbpf Yonghong Song
@ 2018-05-15 23:45 ` Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 5/7] samples/bpf: add a samples/bpf test for BPF_PERF_EVENT_QUERY Yonghong Song
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-15 23:45 UTC (permalink / raw)
  To: peterz, ast, daniel, netdev; +Cc: kernel-team

Given a kernel function name, ksym_get_addr() will return the kernel
address for this function, or 0 if it cannot find this function name
in /proc/kallsyms. This function will be used later when a kernel
address is used to initiate a kprobe perf event.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/testing/selftests/bpf/trace_helpers.c | 12 ++++++++++++
 tools/testing/selftests/bpf/trace_helpers.h |  1 +
 2 files changed, 13 insertions(+)

diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
index 8fb4fe8..3868dcb 100644
--- a/tools/testing/selftests/bpf/trace_helpers.c
+++ b/tools/testing/selftests/bpf/trace_helpers.c
@@ -72,6 +72,18 @@ struct ksym *ksym_search(long key)
 	return &syms[0];
 }
 
+long ksym_get_addr(const char *name)
+{
+	int i;
+
+	for (i = 0; i < sym_cnt; i++) {
+		if (strcmp(syms[i].name, name) == 0)
+			return syms[i].addr;
+	}
+
+	return 0;
+}
+
 static int page_size;
 static int page_cnt = 8;
 static struct perf_event_mmap_page *header;
diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h
index 36d90e3..3b4bcf7 100644
--- a/tools/testing/selftests/bpf/trace_helpers.h
+++ b/tools/testing/selftests/bpf/trace_helpers.h
@@ -11,6 +11,7 @@ struct ksym {
 
 int load_kallsyms(void);
 struct ksym *ksym_search(long key);
+long ksym_get_addr(const char *name);
 
 typedef enum bpf_perf_event_ret (*perf_event_print_fn)(void *data, int size);
 
-- 
2.9.5


* [PATCH bpf-next 5/7] samples/bpf: add a samples/bpf test for BPF_PERF_EVENT_QUERY
  2018-05-15 23:45 [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query Yonghong Song
                   ` (3 preceding siblings ...)
  2018-05-15 23:45 ` [PATCH bpf-next 4/7] tools/bpf: add ksym_get_addr() in trace_helpers Yonghong Song
@ 2018-05-15 23:45 ` Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 6/7] tools/bpf: add two BPF_PERF_EVENT_QUERY tests in test_progs Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 7/7] tools/bpftool: add perf subcommand Yonghong Song
  6 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-15 23:45 UTC (permalink / raw)
  To: peterz, ast, daniel, netdev; +Cc: kernel-team

This is mostly to test kprobes/uprobes, which need kernel headers.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 samples/bpf/Makefile                |   4 +
 samples/bpf/perf_event_query_kern.c |  19 ++
 samples/bpf/perf_event_query_user.c | 376 ++++++++++++++++++++++++++++++++++++
 3 files changed, 399 insertions(+)
 create mode 100644 samples/bpf/perf_event_query_kern.c
 create mode 100644 samples/bpf/perf_event_query_user.c

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 62d1aa1..c23e8fe 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -51,6 +51,7 @@ hostprogs-y += cpustat
 hostprogs-y += xdp_adjust_tail
 hostprogs-y += xdpsock
 hostprogs-y += xdp_fwd
+hostprogs-y += perf_event_query
 
 # Libbpf dependencies
 LIBBPF = $(TOOLS_PATH)/lib/bpf/libbpf.a
@@ -105,6 +106,7 @@ cpustat-objs := bpf_load.o cpustat_user.o
 xdp_adjust_tail-objs := xdp_adjust_tail_user.o
 xdpsock-objs := bpf_load.o xdpsock_user.o
 xdp_fwd-objs := bpf_load.o xdp_fwd_user.o
+perf_event_query-objs := bpf_load.o perf_event_query_user.o $(TRACE_HELPERS)
 
 # Tell kbuild to always build the programs
 always := $(hostprogs-y)
@@ -160,6 +162,7 @@ always += cpustat_kern.o
 always += xdp_adjust_tail_kern.o
 always += xdpsock_kern.o
 always += xdp_fwd_kern.o
+always += perf_event_query_kern.o
 
 HOSTCFLAGS += -I$(objtree)/usr/include
 HOSTCFLAGS += -I$(srctree)/tools/lib/
@@ -175,6 +178,7 @@ HOSTCFLAGS_offwaketime_user.o += -I$(srctree)/tools/lib/bpf/
 HOSTCFLAGS_spintest_user.o += -I$(srctree)/tools/lib/bpf/
 HOSTCFLAGS_trace_event_user.o += -I$(srctree)/tools/lib/bpf/
 HOSTCFLAGS_sampleip_user.o += -I$(srctree)/tools/lib/bpf/
+HOSTCFLAGS_perf_event_query_user.o += -I$(srctree)/tools/lib/bpf/
 
 HOST_LOADLIBES		+= $(LIBBPF) -lelf
 HOSTLOADLIBES_tracex4		+= -lrt
diff --git a/samples/bpf/perf_event_query_kern.c b/samples/bpf/perf_event_query_kern.c
new file mode 100644
index 0000000..f4b0a9e
--- /dev/null
+++ b/samples/bpf/perf_event_query_kern.c
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/version.h>
+#include <linux/ptrace.h>
+#include <uapi/linux/bpf.h>
+#include "bpf_helpers.h"
+
+SEC("kprobe/blk_start_request")
+int bpf_prog1(struct pt_regs *ctx)
+{
+	return 0;
+}
+
+SEC("kretprobe/blk_account_io_completion")
+int bpf_prog2(struct pt_regs *ctx)
+{
+	return 0;
+}
+char _license[] SEC("license") = "GPL";
+u32 _version SEC("version") = LINUX_VERSION_CODE;
diff --git a/samples/bpf/perf_event_query_user.c b/samples/bpf/perf_event_query_user.c
new file mode 100644
index 0000000..bf46578
--- /dev/null
+++ b/samples/bpf/perf_event_query_user.c
@@ -0,0 +1,376 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdbool.h>
+#include <string.h>
+#include <stdint.h>
+#include <fcntl.h>
+#include <linux/bpf.h>
+#include <sys/ioctl.h>
+#include <sys/resource.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+
+#include "libbpf.h"
+#include "bpf_load.h"
+#include "bpf_util.h"
+#include "perf-sys.h"
+#include "trace_helpers.h"
+
+#define CHECK_PERROR_RET(condition) ({			\
+	int __ret = !!(condition);			\
+	if (__ret) {					\
+		printf("FAIL: %s:\n", __func__);	\
+		perror("    ");			\
+		return -1;				\
+	}						\
+})
+
+#define CHECK_AND_RET(condition) ({			\
+	int __ret = !!(condition);			\
+	if (__ret)					\
+		return -1;				\
+})
+
+static __u64 ptr_to_u64(void *ptr)
+{
+	return (__u64) (unsigned long) ptr;
+}
+
+#define PMU_TYPE_FILE "/sys/bus/event_source/devices/%s/type"
+static int bpf_find_probe_type(const char *event_type)
+{
+	char buf[256];
+	int fd, ret;
+
+	ret = snprintf(buf, sizeof(buf), PMU_TYPE_FILE, event_type);
+	CHECK_PERROR_RET(ret < 0 || ret >= sizeof(buf));
+
+	fd = open(buf, O_RDONLY);
+	CHECK_PERROR_RET(fd < 0);
+
+	ret = read(fd, buf, sizeof(buf));
+	close(fd);
+	CHECK_PERROR_RET(ret < 0 || ret >= sizeof(buf));
+
+	errno = 0;
+	ret = (int)strtol(buf, NULL, 10);
+	CHECK_PERROR_RET(errno);
+	return ret;
+}
+
+#define PMU_RETPROBE_FILE "/sys/bus/event_source/devices/%s/format/retprobe"
+static int bpf_get_retprobe_bit(const char *event_type)
+{
+	char buf[256];
+	int fd, ret;
+
+	ret = snprintf(buf, sizeof(buf), PMU_RETPROBE_FILE, event_type);
+	CHECK_PERROR_RET(ret < 0 || ret >= sizeof(buf));
+
+	fd = open(buf, O_RDONLY);
+	CHECK_PERROR_RET(fd < 0);
+
+	ret = read(fd, buf, sizeof(buf));
+	close(fd);
+	CHECK_PERROR_RET(ret < 0 || ret >= sizeof(buf));
+	CHECK_PERROR_RET(strlen(buf) < strlen("config:"));
+
+	errno = 0;
+	ret = (int)strtol(buf + strlen("config:"), NULL, 10);
+	CHECK_PERROR_RET(errno);
+	return ret;
+}
+
+static int test_debug_fs_kprobe(int fd_idx, const char *fn_name,
+				__u32 expected_prog_info)
+{
+	__u64 probe_offset, probe_addr;
+	__u32 prog_id, prog_info;
+	char buf[256];
+	int err;
+
+	err = bpf_trace_event_query(getpid(), event_fd[fd_idx], buf,
+		sizeof(buf), &prog_id, &prog_info, &probe_offset, &probe_addr);
+	if (err < 0) {
+		printf("FAIL: %s, for event_fd idx %d, fn_name %s\n",
+		       __func__, fd_idx, fn_name);
+		perror("    :");
+		return -1;
+	}
+	if (strcmp(buf, fn_name) != 0 ||
+	    prog_info != expected_prog_info ||
+	    probe_offset != 0x0 || probe_addr != 0x0) {
+		printf("FAIL: bpf_trace_event_query(event_fd[%d]):\n", fd_idx);
+		printf("buf: %s, prog_info: %u, probe_offset: 0x%llx,"
+		       " probe_addr: 0x%llx\n",
+		       buf, prog_info, probe_offset, probe_addr);
+		return -1;
+	}
+	return 0;
+}
+
+static int test_nondebug_fs_kuprobe_common(const char *event_type,
+	const char *name, __u64 offset, __u64 addr, bool is_return,
+	char *buf, int buf_len, __u32 *prog_id, __u32 *prog_info,
+	__u64 *probe_offset, __u64 *probe_addr)
+{
+	int is_return_bit = bpf_get_retprobe_bit(event_type);
+	int type = bpf_find_probe_type(event_type);
+	struct perf_event_attr attr = {};
+	int fd;
+
+	if (type < 0 || is_return_bit < 0) {
+		printf("FAIL: %s incorrect type (%d) or is_return_bit (%d)\n",
+			__func__, type, is_return_bit);
+		return -1;
+	}
+
+	attr.sample_period = 1;
+	attr.wakeup_events = 1;
+	if (is_return)
+		attr.config |= 1 << is_return_bit;
+
+	if (name) {
+		attr.config1 = ptr_to_u64((void *)name);
+		attr.config2 = offset;
+	} else {
+		attr.config1 = 0;
+		attr.config2 = addr;
+	}
+	attr.size = sizeof(attr);
+	attr.type = type;
+
+	fd = sys_perf_event_open(&attr, -1, 0, -1, 0);
+	CHECK_PERROR_RET(fd < 0);
+
+	CHECK_PERROR_RET(ioctl(fd, PERF_EVENT_IOC_ENABLE, 0) < 0);
+	CHECK_PERROR_RET(ioctl(fd, PERF_EVENT_IOC_SET_BPF, prog_fd[0]) < 0);
+	CHECK_PERROR_RET(bpf_trace_event_query(getpid(), fd, buf, buf_len,
+		prog_id, prog_info, probe_offset, probe_addr) < 0);
+
+	return 0;
+}
+
+static int test_nondebug_fs_probe(const char *event_type, const char *name,
+				  __u64 offset, __u64 addr, bool is_return,
+				  __u32 expected_prog_info,
+				  __u32 expected_ret_prog_info,
+				  char *buf, int buf_len)
+{
+	__u64 probe_offset, probe_addr;
+	__u32 prog_id, prog_info;
+	int err;
+
+	err = test_nondebug_fs_kuprobe_common(event_type, name,
+		offset, addr, is_return,
+		buf, buf_len, &prog_id, &prog_info, &probe_offset,
+		&probe_addr);
+	if (err < 0) {
+		printf("FAIL: %s, "
+		       "for name %s, offset 0x%llx, addr 0x%llx, is_return %d\n",
+		       __func__, name ? name : "", offset, addr, is_return);
+		perror("    :");
+		return -1;
+	}
+	if ((is_return && prog_info != expected_ret_prog_info) ||
+	    (!is_return && prog_info != expected_prog_info)) {
+		printf("FAIL: %s, incorrect prog_info %u\n",
+		       __func__, prog_info);
+		return -1;
+	}
+	if (name) {
+		if (strcmp(name, buf) != 0) {
+			printf("FAIL: %s, incorrect buf %s\n", __func__, buf);
+			return -1;
+		}
+		if (probe_offset != offset) {
+			printf("FAIL: %s, incorrect probe_offset 0x%llx\n",
+			       __func__, probe_offset);
+			return -1;
+		}
+	} else {
+		if (buf && buf[0] != '\0') {
+			printf("FAIL: %s, incorrect buf %p\n",
+			       __func__, buf);
+			return -1;
+		}
+
+		if (probe_addr != addr) {
+			printf("FAIL: %s, incorrect probe_addr 0x%llx\n",
+			       __func__, probe_addr);
+			return -1;
+		}
+	}
+	return 0;
+}
+
+static int test_debug_fs_uprobe(char *binary_path, long offset, bool is_return)
+{
+	struct perf_event_attr attr = {};
+	const char *event_type = "uprobe";
+	char buf[256], event_alias[256];
+	__u64 probe_offset, probe_addr;
+	__u32 prog_id, prog_info;
+	int err, res, kfd, efd;
+	ssize_t bytes;
+
+	snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/%s_events",
+		 event_type);
+	kfd = open(buf, O_WRONLY | O_APPEND, 0);
+	CHECK_PERROR_RET(kfd < 0);
+
+	res = snprintf(event_alias, sizeof(event_alias), "test_%d", getpid());
+	CHECK_PERROR_RET(res < 0 || res >= sizeof(event_alias));
+
+	res = snprintf(buf, sizeof(buf), "%c:%ss/%s %s:0x%lx",
+		       is_return ? 'r' : 'p', event_type, event_alias,
+		       binary_path, offset);
+	CHECK_PERROR_RET(res < 0 || res >= sizeof(buf));
+	CHECK_PERROR_RET(write(kfd, buf, strlen(buf)) < 0);
+
+	close(kfd);
+	kfd = -1;
+
+	snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/events/%ss/%s/id",
+		 event_type, event_alias);
+	efd = open(buf, O_RDONLY, 0);
+	CHECK_PERROR_RET(efd < 0);
+
+	bytes = read(efd, buf, sizeof(buf));
+	CHECK_PERROR_RET(bytes <= 0 || bytes >= sizeof(buf));
+	close(efd);
+	buf[bytes] = '\0';
+
+	attr.config = strtol(buf, NULL, 0);
+	attr.type = PERF_TYPE_TRACEPOINT;
+	attr.sample_period = 1;
+	attr.wakeup_events = 1;
+	kfd = sys_perf_event_open(&attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC);
+	CHECK_PERROR_RET(kfd < 0);
+	CHECK_PERROR_RET(ioctl(kfd, PERF_EVENT_IOC_SET_BPF, prog_fd[0]) < 0);
+	CHECK_PERROR_RET(ioctl(kfd, PERF_EVENT_IOC_ENABLE, 0) < 0);
+
+	err = bpf_trace_event_query(getpid(), kfd, buf, sizeof(buf),
+		&prog_id, &prog_info, &probe_offset, &probe_addr);
+	if (err < 0) {
+		printf("FAIL: %s, binary_path %s\n", __func__, binary_path);
+		perror("    :");
+		return -1;
+	}
+	if ((is_return && prog_info != BPF_PERF_INFO_URETPROBE) ||
+	    (!is_return && prog_info != BPF_PERF_INFO_UPROBE)) {
+		printf("FAIL: %s, incorrect prog_info %u\n", __func__,
+		       prog_info);
+		return -1;
+	}
+	if (strcmp(binary_path, buf) != 0) {
+		printf("FAIL: %s, incorrect buf %s\n", __func__, buf);
+		return -1;
+	}
+	if (probe_offset != offset) {
+		printf("FAIL: %s, incorrect probe_offset 0x%llx\n", __func__,
+		       probe_offset);
+		return -1;
+	}
+
+	close(kfd);
+	return 0;
+}
+
+int main(int argc, char **argv)
+{
+	struct rlimit r = {1024*1024, RLIM_INFINITY};
+	extern char __executable_start;
+	__u64 uprobe_file_offset;
+	char filename[256], buf[256];
+
+	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
+		perror("setrlimit(RLIMIT_MEMLOCK)");
+		return 1;
+	}
+
+	if (load_kallsyms()) {
+		printf("failed to process /proc/kallsyms\n");
+		return 1;
+	}
+
+	if (load_bpf_file(filename)) {
+		printf("%s", bpf_log_buf);
+		return 1;
+	}
+
+	/* test two functions in the corresponding *_kern.c file */
+	CHECK_AND_RET(test_debug_fs_kprobe(0, "blk_start_request",
+					   BPF_PERF_INFO_KPROBE));
+	CHECK_AND_RET(test_debug_fs_kprobe(1, "blk_account_io_completion",
+					   BPF_PERF_INFO_KRETPROBE));
+
+	/* test nondebug fs kprobe */
+	CHECK_AND_RET(test_nondebug_fs_probe("kprobe", "bpf_check", 0x0, 0x0,
+					     false, BPF_PERF_INFO_KPROBE,
+					     BPF_PERF_INFO_KRETPROBE,
+					     buf, sizeof(buf)));
+#ifdef __x86_64__
+	/* set a kprobe on "bpf_check + 0x5", which is x64 specific */
+	CHECK_AND_RET(test_nondebug_fs_probe("kprobe", "bpf_check", 0x5, 0x0,
+					     false, BPF_PERF_INFO_KPROBE,
+					     BPF_PERF_INFO_KRETPROBE,
+					     buf, sizeof(buf)));
+#endif
+	CHECK_AND_RET(test_nondebug_fs_probe("kprobe", "bpf_check", 0x0, 0x0,
+					     true, BPF_PERF_INFO_KPROBE,
+					     BPF_PERF_INFO_KRETPROBE,
+					     buf, sizeof(buf)));
+	CHECK_AND_RET(test_nondebug_fs_probe("kprobe", NULL, 0x0,
+					     ksym_get_addr("bpf_check"), false,
+					     BPF_PERF_INFO_KPROBE,
+					     BPF_PERF_INFO_KRETPROBE,
+					     buf, sizeof(buf)));
+	CHECK_AND_RET(test_nondebug_fs_probe("kprobe", NULL, 0x0,
+					     ksym_get_addr("bpf_check"), false,
+					     BPF_PERF_INFO_KPROBE,
+					     BPF_PERF_INFO_KRETPROBE,
+					     NULL, 0));
+	CHECK_AND_RET(test_nondebug_fs_probe("kprobe", NULL, 0x0,
+					     ksym_get_addr("bpf_check"), true,
+					     BPF_PERF_INFO_KPROBE,
+					     BPF_PERF_INFO_KRETPROBE,
+					     buf, sizeof(buf)));
+	CHECK_AND_RET(test_nondebug_fs_probe("kprobe", NULL, 0x0,
+					     ksym_get_addr("bpf_check"), true,
+					     BPF_PERF_INFO_KPROBE,
+					     BPF_PERF_INFO_KRETPROBE,
+					     NULL, 0));
+
+	/* test nondebug fs uprobe */
+	/* the calculation of uprobe file offset is based on gcc 7.3.1 on x64
+	 * and the default linker script, which defines __executable_start as
+	 * the start of the .text section. The calculation could be different
+	 * on different systems with different compilers. The right way is
+	 * to parse the ELF file. We took a shortcut here.
+	 */
+	uprobe_file_offset = (__u64)main - (__u64)&__executable_start;
+	CHECK_AND_RET(test_nondebug_fs_probe("uprobe", (char *)argv[0],
+					     uprobe_file_offset, 0x0, false,
+					     BPF_PERF_INFO_UPROBE,
+					     BPF_PERF_INFO_URETPROBE,
+					     buf, sizeof(buf)));
+	CHECK_AND_RET(test_nondebug_fs_probe("uprobe", (char *)argv[0],
+					     uprobe_file_offset, 0x0, true,
+					     BPF_PERF_INFO_UPROBE,
+					     BPF_PERF_INFO_URETPROBE,
+					     buf, sizeof(buf)));
+
+	/* test debug fs uprobe */
+	CHECK_AND_RET(test_debug_fs_uprobe((char *)argv[0], uprobe_file_offset,
+					   false));
+	CHECK_AND_RET(test_debug_fs_uprobe((char *)argv[0], uprobe_file_offset,
+					   true));
+
+	return 0;
+}
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH bpf-next 6/7] tools/bpf: add two BPF_PERF_EVENT_QUERY tests in test_progs
  2018-05-15 23:45 [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query Yonghong Song
                   ` (4 preceding siblings ...)
  2018-05-15 23:45 ` [PATCH bpf-next 5/7] samples/bpf: add a samples/bpf test for BPF_PERF_EVENT_QUERY Yonghong Song
@ 2018-05-15 23:45 ` Yonghong Song
  2018-05-15 23:45 ` [PATCH bpf-next 7/7] tools/bpftool: add perf subcommand Yonghong Song
  6 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-15 23:45 UTC (permalink / raw)
  To: peterz, ast, daniel, netdev; +Cc: kernel-team

The new tests query perf_event information
for raw_tracepoint and tracepoint attachments. For tracepoints,
both syscall and non-syscall tracepoints are queried, as
the two are treated slightly differently inside the kernel.
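For reference, the perf_event_attr setup these tests rely on boils
down to the following sketch (illustrative only; setup_tp_attr is a
hypothetical helper, not part of the patch):

```c
/* Illustrative sketch: how the tracepoint perf_event_attr is filled
 * before perf_event_open()/PERF_EVENT_IOC_SET_BPF.  At runtime the
 * tracepoint id is read from
 * /sys/kernel/debug/tracing/events/<probe>/id.
 */
#include <assert.h>
#include <string.h>
#include <linux/perf_event.h>

static void setup_tp_attr(struct perf_event_attr *attr, long tp_id)
{
	memset(attr, 0, sizeof(*attr));
	attr->type = PERF_TYPE_TRACEPOINT;
	attr->config = tp_id;		/* id read from debugfs */
	attr->sample_type = PERF_SAMPLE_RAW;
	attr->sample_period = 1;
	attr->wakeup_events = 1;
}
```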

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/testing/selftests/bpf/test_progs.c | 133 +++++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)

diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index 3ecf733..138d1e9 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -1542,6 +1542,137 @@ static void test_get_stack_raw_tp(void)
 	bpf_object__close(obj);
 }
 
+static void test_query_trace_event_rawtp(void)
+{
+	const char *file = "./test_get_stack_rawtp.o";
+	struct perf_event_attr attr = {};
+	int efd, err, prog_fd, pmu_fd;
+	struct bpf_object *obj;
+	__u32 duration = 0;
+	char buf[256];
+	__u32 prog_id, prog_info;
+	__u64 probe_offset, probe_addr;
+
+	err = bpf_prog_load(file, BPF_PROG_TYPE_RAW_TRACEPOINT, &obj, &prog_fd);
+	if (CHECK(err, "prog_load raw tp", "err %d errno %d\n", err, errno))
+		return;
+
+	efd = bpf_raw_tracepoint_open("sys_enter", prog_fd);
+	if (CHECK(efd < 0, "raw_tp_open", "err %d errno %d\n", efd, errno))
+		goto close_prog;
+
+	attr.sample_type = PERF_SAMPLE_RAW;
+	attr.type = PERF_TYPE_SOFTWARE;
+	attr.config = PERF_COUNT_SW_BPF_OUTPUT;
+	pmu_fd = syscall(__NR_perf_event_open, &attr, getpid(), -1, -1, 0);
+	if (CHECK(pmu_fd < 0, "perf_event_open", "err %d errno %d\n", pmu_fd,
+		  errno))
+		goto close_prog;
+
+	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
+	if (CHECK(err < 0, "ioctl PERF_EVENT_IOC_ENABLE", "err %d errno %d\n",
+		  err, errno))
+		goto close_prog;
+
+	/* query (getpid(), efd) */
+	err = bpf_trace_event_query(getpid(), efd, buf, sizeof(buf), &prog_id,
+				    &prog_info, &probe_offset, &probe_addr);
+	if (CHECK(err < 0, "bpf_trace_event_query", "err %d errno %d\n", err,
+		  errno))
+		goto close_prog;
+
+	err = (prog_info == BPF_PERF_INFO_TP_NAME) &&
+	      (strcmp(buf, "sys_enter") == 0);
+	if (CHECK(!err, "check_results", "prog_info %d tp_name %s\n",
+		  prog_info, buf))
+		goto close_prog;
+
+	goto close_prog_noerr;
+close_prog:
+	error_cnt++;
+close_prog_noerr:
+	bpf_object__close(obj);
+}
+
+static void test_query_trace_event_tp_core(const char *probe_name,
+					   const char *tp_name)
+{
+	const char *file = "./test_tracepoint.o";
+	int err, bytes, efd, prog_fd, pmu_fd;
+	struct perf_event_attr attr = {};
+	struct bpf_object *obj;
+	__u32 duration = 0;
+	char buf[256];
+	__u32 prog_id, prog_info;
+	__u64 probe_offset, probe_addr;
+
+	err = bpf_prog_load(file, BPF_PROG_TYPE_TRACEPOINT, &obj, &prog_fd);
+	if (CHECK(err, "bpf_prog_load", "err %d errno %d\n", err, errno))
+		return;
+
+	snprintf(buf, sizeof(buf),
+		 "/sys/kernel/debug/tracing/events/%s/id", probe_name);
+	efd = open(buf, O_RDONLY, 0);
+	if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno))
+		goto close_prog;
+	bytes = read(efd, buf, sizeof(buf) - 1);
+	close(efd);
+	if (CHECK(bytes <= 0, "read", "bytes %d errno %d\n", bytes, errno))
+		goto close_prog;
+	buf[bytes] = '\0';
+
+	attr.config = strtol(buf, NULL, 0);
+	attr.type = PERF_TYPE_TRACEPOINT;
+	attr.sample_type = PERF_SAMPLE_RAW;
+	attr.sample_period = 1;
+	attr.wakeup_events = 1;
+	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
+			 0 /* cpu 0 */, -1 /* group id */,
+			 0 /* flags */);
+	if (CHECK(pmu_fd < 0, "perf_event_open", "errno %d\n", errno))
+		goto close_prog;
+
+	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
+	if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n", err,
+		  errno))
+		goto close_pmu;
+
+	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
+	if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n", err,
+		  errno))
+		goto close_pmu;
+
+	/* query (getpid(), pmu_fd) */
+	err = bpf_trace_event_query(getpid(), pmu_fd, buf, 256, &prog_id,
+				    &prog_info, &probe_offset, &probe_addr);
+	if (CHECK(err < 0, "bpf_trace_event_query", "err %d errno %d\n", err,
+		  errno))
+		goto close_pmu;
+
+	err = (prog_info == BPF_PERF_INFO_TP_NAME) && !strcmp(buf, tp_name);
+	if (CHECK(!err, "check_results", "prog_info %d tp_name %s\n",
+		  prog_info, buf))
+		goto close_pmu;
+
+	close(pmu_fd);
+	goto close_prog_noerr;
+
+close_pmu:
+	close(pmu_fd);
+close_prog:
+	error_cnt++;
+close_prog_noerr:
+	bpf_object__close(obj);
+}
+
+static void test_query_trace_event_tp(void)
+{
+	test_query_trace_event_tp_core("sched/sched_switch",
+				       "sched_switch");
+	test_query_trace_event_tp_core("syscalls/sys_enter_read",
+				       "sys_enter_read");
+}
+
 int main(void)
 {
 	jit_enabled = is_jit_enabled();
@@ -1561,6 +1692,8 @@ int main(void)
 	test_stacktrace_build_id_nmi();
 	test_stacktrace_map_raw_tp();
 	test_get_stack_raw_tp();
+	test_query_trace_event_rawtp();
+	test_query_trace_event_tp();
 
 	printf("Summary: %d PASSED, %d FAILED\n", pass_cnt, error_cnt);
 	return error_cnt ? EXIT_FAILURE : EXIT_SUCCESS;
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH bpf-next 7/7] tools/bpftool: add perf subcommand
  2018-05-15 23:45 [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query Yonghong Song
                   ` (5 preceding siblings ...)
  2018-05-15 23:45 ` [PATCH bpf-next 6/7] tools/bpf: add two BPF_PERF_EVENT_QUERY tests in test_progs Yonghong Song
@ 2018-05-15 23:45 ` Yonghong Song
  2018-05-16  4:41   ` Jakub Kicinski
  6 siblings, 1 reply; 15+ messages in thread
From: Yonghong Song @ 2018-05-15 23:45 UTC (permalink / raw)
  To: peterz, ast, daniel, netdev; +Cc: kernel-team

The new command "bpftool perf [show]" traverses
all processes under /proc and, for any fd associated
with a perf event, prints out the related perf event
information.
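As a rough sketch (illustrative only, not the bpftool code itself),
the /proc walk amounts to accepting only paths of the exact shape
/proc/<pid>/fd/<fd> and extracting the two numbers; the hypothetical
helper below mirrors what show_proc() does by hand:

```c
/* Hypothetical helper: classify a path seen while walking /proc and
 * pull out <pid> and <fd> when it matches /proc/<pid>/fd/<fd>.
 * Returns 0 on a match, -1 otherwise.
 */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static int parse_proc_fd_path(const char *fpath, int *pid, int *fd)
{
	char *end;

	if (strncmp(fpath, "/proc/", 6) != 0)
		return -1;
	*pid = (int)strtol(fpath + 6, &end, 10);
	if (end == fpath + 6 || strncmp(end, "/fd/", 4) != 0)
		return -1;
	fpath = end + 4;
	*fd = (int)strtol(fpath, &end, 10);
	if (end == fpath || *end != '\0')
		return -1;
	return 0;
}
```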

Below is an example showing the results while the following
four bcc commands are running:
  kprobe:     trace.py '__x64_sys_write'
  kretprobe:  trace.py 'r::__x64_sys_nanosleep'
  tracepoint: trace.py 't:syscalls:sys_enter_nanosleep'
  uprobe:     trace.py 'p:/home/yhs/a.out:main'

The bpftool command line and result:

  $ bpftool perf
  21711: prog_id 5 kprobe func __x64_sys_write offset 0
  21765: prog_id 7 kretprobe func __x64_sys_nanosleep offset 0
  21767: prog_id 8 tracepoint sys_enter_nanosleep
  21800: prog_id 9 uprobe filename /home/yhs/a.out offset 1159

  $ bpftool -j perf
  {"pid":21711,"prog_id":5,"prog_info":"kprobe","func":"__x64_sys_write","offset":0}, \
  {"pid":21765,"prog_id":7,"prog_info":"kretprobe","func":"__x64_sys_nanosleep","offset":0}, \
  {"pid":21767,"prog_id":8,"prog_info":"tracepoint","tracepoint":"sys_enter_nanosleep"}, \
  {"pid":21800,"prog_id":9,"prog_info":"uprobe","filename":"/home/yhs/a.out","offset":1159}

  $ bpftool prog
  5: kprobe  name probe___x64_sys  tag e495a0c82f2c7a8d  gpl
	  loaded_at 2018-05-15T04:46:37-0700  uid 0
	  xlated 200B  not jited  memlock 4096B  map_ids 4
  7: kprobe  name probe___x64_sys  tag f2fdee479a503abf  gpl
	  loaded_at 2018-05-15T04:48:32-0700  uid 0
	  xlated 200B  not jited  memlock 4096B  map_ids 7
  8: tracepoint  name tracepoint__sys  tag 5390badef2395fcf  gpl
	  loaded_at 2018-05-15T04:48:48-0700  uid 0
	  xlated 200B  not jited  memlock 4096B  map_ids 8
  9: kprobe  name probe_main_1  tag 0a87bdc2e2953b6d  gpl
	  loaded_at 2018-05-15T04:49:52-0700  uid 0
	  xlated 200B  not jited  memlock 4096B  map_ids 9

  $ ps ax | grep "python ./trace.py"
  21711 pts/0    T      0:03 python ./trace.py __x64_sys_write
  21765 pts/0    S+     0:00 python ./trace.py r::__x64_sys_nanosleep
  21767 pts/2    S+     0:00 python ./trace.py t:syscalls:sys_enter_nanosleep
  21800 pts/3    S+     0:00 python ./trace.py p:/home/yhs/a.out:main
  22374 pts/1    S+     0:00 grep --color=auto python ./trace.py

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/bpf/bpftool/main.c |   3 +-
 tools/bpf/bpftool/main.h |   1 +
 tools/bpf/bpftool/perf.c | 188 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 191 insertions(+), 1 deletion(-)
 create mode 100644 tools/bpf/bpftool/perf.c

diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
index 1ec852d..eea7f14 100644
--- a/tools/bpf/bpftool/main.c
+++ b/tools/bpf/bpftool/main.c
@@ -87,7 +87,7 @@ static int do_help(int argc, char **argv)
 		"       %s batch file FILE\n"
 		"       %s version\n"
 		"\n"
-		"       OBJECT := { prog | map | cgroup }\n"
+		"       OBJECT := { prog | map | cgroup | perf }\n"
 		"       " HELP_SPEC_OPTIONS "\n"
 		"",
 		bin_name, bin_name, bin_name);
@@ -216,6 +216,7 @@ static const struct cmd cmds[] = {
 	{ "prog",	do_prog },
 	{ "map",	do_map },
 	{ "cgroup",	do_cgroup },
+	{ "perf",	do_perf },
 	{ "version",	do_version },
 	{ 0 }
 };
diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
index 6173cd9..63fdb31 100644
--- a/tools/bpf/bpftool/main.h
+++ b/tools/bpf/bpftool/main.h
@@ -119,6 +119,7 @@ int do_prog(int argc, char **arg);
 int do_map(int argc, char **arg);
 int do_event_pipe(int argc, char **argv);
 int do_cgroup(int argc, char **arg);
+int do_perf(int argc, char **arg);
 
 int prog_parse_fd(int *argc, char ***argv);
 int map_parse_fd_and_info(int *argc, char ***argv, void *info, __u32 *info_len);
diff --git a/tools/bpf/bpftool/perf.c b/tools/bpf/bpftool/perf.c
new file mode 100644
index 0000000..6d676e4
--- /dev/null
+++ b/tools/bpf/bpftool/perf.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0+
+// Copyright (C) 2018 Facebook
+// Author: Yonghong Song <yhs@fb.com>
+
+#define _GNU_SOURCE
+#include <fcntl.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <unistd.h>
+#include <ftw.h>
+
+#include <bpf.h>
+
+#include "main.h"
+
+static void print_perf_json(int pid, __u32 prog_id, __u32 prog_info,
+			    char *buf, __u64 probe_offset, __u64 probe_addr)
+{
+	jsonw_start_object(json_wtr);
+	jsonw_int_field(json_wtr, "pid", pid);
+	jsonw_uint_field(json_wtr, "prog_id", prog_id);
+	switch (prog_info) {
+	case BPF_PERF_INFO_TP_NAME:
+		jsonw_string_field(json_wtr, "prog_info", "tracepoint");
+		jsonw_string_field(json_wtr, "tracepoint", buf);
+		break;
+	case BPF_PERF_INFO_KPROBE:
+		jsonw_string_field(json_wtr, "prog_info", "kprobe");
+		if (buf[0] != '\0') {
+			jsonw_string_field(json_wtr, "func", buf);
+			jsonw_lluint_field(json_wtr, "offset", probe_offset);
+		} else {
+			jsonw_lluint_field(json_wtr, "addr", probe_addr);
+		}
+		break;
+	case BPF_PERF_INFO_KRETPROBE:
+		jsonw_string_field(json_wtr, "prog_info", "kretprobe");
+		if (buf[0] != '\0') {
+			jsonw_string_field(json_wtr, "func", buf);
+			jsonw_lluint_field(json_wtr, "offset", probe_offset);
+		} else {
+			jsonw_lluint_field(json_wtr, "addr", probe_addr);
+		}
+		break;
+	case BPF_PERF_INFO_UPROBE:
+		jsonw_string_field(json_wtr, "prog_info", "uprobe");
+		jsonw_string_field(json_wtr, "filename", buf);
+		jsonw_lluint_field(json_wtr, "offset", probe_offset);
+		break;
+	case BPF_PERF_INFO_URETPROBE:
+		jsonw_string_field(json_wtr, "prog_info", "uretprobe");
+		jsonw_string_field(json_wtr, "filename", buf);
+		jsonw_lluint_field(json_wtr, "offset", probe_offset);
+		break;
+	}
+	jsonw_end_object(json_wtr);
+}
+
+static void print_perf_plain(int pid, __u32 prog_id, __u32 prog_info,
+			    char *buf, __u64 probe_offset, __u64 probe_addr)
+{
+	printf("%d: prog_id %u ", pid, prog_id);
+	switch (prog_info) {
+	case BPF_PERF_INFO_TP_NAME:
+		printf("tracepoint %s\n", buf);
+		break;
+	case BPF_PERF_INFO_KPROBE:
+		if (buf[0] != '\0')
+			printf("kprobe func %s offset %llu\n", buf,
+			       probe_offset);
+		else
+			printf("kprobe addr %llu\n", probe_addr);
+		break;
+	case BPF_PERF_INFO_KRETPROBE:
+		if (buf[0] != '\0')
+			printf("kretprobe func %s offset %llu\n", buf,
+			       probe_offset);
+		else
+			printf("kretprobe addr %llu\n", probe_addr);
+		break;
+	case BPF_PERF_INFO_UPROBE:
+		printf("uprobe filename %s offset %llu\n", buf, probe_offset);
+		break;
+	case BPF_PERF_INFO_URETPROBE:
+		printf("uretprobe filename %s offset %llu\n", buf,
+		       probe_offset);
+		break;
+	}
+}
+
+static int show_proc(const char *fpath, const struct stat *sb,
+		     int tflag, struct FTW *ftwbuf)
+{
+	__u64 probe_offset, probe_addr;
+	__u32 prog_id, prog_info;
+	int err, pid = 0, fd = 0;
+	const char *pch;
+	char buf[4096];
+
+	/* prefix always /proc */
+	pch = fpath + 5;
+	if (*pch == '\0')
+		return 0;
+
+	/* pid should be all numbers */
+	pch++;
+	while (*pch >= '0' && *pch <= '9') {
+		pid = pid * 10 + *pch - '0';
+		pch++;
+	}
+	if (*pch == '\0')
+		return 0;
+	if (*pch != '/')
+		return FTW_SKIP_SUBTREE;
+
+	/* check /proc/<pid>/fd directory */
+	pch++;
+	if (*pch == '\0' || *pch != 'f')
+		return FTW_SKIP_SUBTREE;
+	pch++;
+	if (*pch == '\0' || *pch != 'd')
+		return FTW_SKIP_SUBTREE;
+	pch++;
+	if (*pch == '\0')
+		return 0;
+	if (*pch != '/')
+		return FTW_SKIP_SUBTREE;
+
+	/* check /proc/<pid>/fd/<fd_num> */
+	pch++;
+	while (*pch >= '0' && *pch <= '9') {
+		fd = fd * 10 + *pch - '0';
+		pch++;
+	}
+	if (*pch != '\0')
+		return FTW_SKIP_SUBTREE;
+
+	/* query (pid, fd) for potential perf events */
+	err = bpf_trace_event_query(pid, fd, buf, sizeof(buf),
+		&prog_id, &prog_info, &probe_offset, &probe_addr);
+	if (err < 0)
+		return 0;
+
+	if (json_output)
+		print_perf_json(pid, prog_id, prog_info, buf, probe_offset,
+				probe_addr);
+	else
+		print_perf_plain(pid, prog_id, prog_info, buf, probe_offset,
+				 probe_addr);
+
+	return 0;
+}
+
+static int do_show(int argc, char **argv)
+{
+	int nopenfd = 16;
+	int flags = FTW_ACTIONRETVAL | FTW_PHYS;
+
+	if (nftw("/proc", show_proc, nopenfd, flags) == -1) {
+		perror("nftw");
+		return -1;
+	}
+
+	return 0;
+}
+
+static int do_help(int argc, char **argv)
+{
+	fprintf(stderr,
+		"Usage: %s %s { show | help }\n"
+		"",
+		bin_name, argv[-2]);
+
+	return 0;
+}
+
+static const struct cmd cmds[] = {
+	{ "show",	do_show },
+	{ "help",	do_help },
+	{ 0 }
+};
+
+int do_perf(int argc, char **argv)
+{
+	return cmd_select(cmds, argc, argv, do_help);
+}
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next 7/7] tools/bpftool: add perf subcommand
  2018-05-15 23:45 ` [PATCH bpf-next 7/7] tools/bpftool: add perf subcommand Yonghong Song
@ 2018-05-16  4:41   ` Jakub Kicinski
  2018-05-16  5:54     ` Yonghong Song
  0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2018-05-16  4:41 UTC (permalink / raw)
  To: Yonghong Song; +Cc: peterz, ast, daniel, netdev, kernel-team, Quentin Monnet

On Tue, 15 May 2018 16:45:21 -0700, Yonghong Song wrote:
> The new command "bpftool perf [show]" will traverse
> all processes under /proc, and if any fd is associated
> with a perf event, it will print out related perf event
> information.
> 
> Below is an example to show the results using bcc commands.
> Running the following 4 bcc commands:
>   kprobe:     trace.py '__x64_sys_nanosleep'
>   kretprobe:  trace.py 'r::__x64_sys_nanosleep'
>   tracepoint: trace.py 't:syscalls:sys_enter_nanosleep'
>   uprobe:     trace.py 'p:/home/yhs/a.out:main'
> 
> The bpftool command line and result:
> 
>   $ bpftool perf
>   21711: prog_id 5 kprobe func __x64_sys_write offset 0
>   21765: prog_id 7 kretprobe func __x64_sys_nanosleep offset 0
>   21767: prog_id 8 tracepoint sys_enter_nanosleep
>   21800: prog_id 9 uprobe filename /home/yhs/a.out offset 1159
> 
>   $ bpftool -j perf
>   {"pid":21711,"prog_id":5,"prog_info":"kprobe","func":"__x64_sys_write","offset":0}, \
>   {"pid":21765,"prog_id":7,"prog_info":"kretprobe","func":"__x64_sys_nanosleep","offset":0}, \
>   {"pid":21767,"prog_id":8,"prog_info":"tracepoint","tracepoint":"sys_enter_nanosleep"}, \
>   {"pid":21800,"prog_id":9,"prog_info":"uprobe","filename":"/home/yhs/a.out","offset":1159}

You need to wrap the objects inside an array, so

	if (json_output)
		jsonw_start_array(json_wtr);
	nftw();
	if (json_output)
		jsonw_end_array(json_wtr);

otherwise output will not be a valid JSON.  To validate JSON try:

$ bpftool -j perf | python -m json.tool

>   $ bpftool prog
>   5: kprobe  name probe___x64_sys  tag e495a0c82f2c7a8d  gpl
> 	  loaded_at 2018-05-15T04:46:37-0700  uid 0
> 	  xlated 200B  not jited  memlock 4096B  map_ids 4
>   7: kprobe  name probe___x64_sys  tag f2fdee479a503abf  gpl
> 	  loaded_at 2018-05-15T04:48:32-0700  uid 0
> 	  xlated 200B  not jited  memlock 4096B  map_ids 7
>   8: tracepoint  name tracepoint__sys  tag 5390badef2395fcf  gpl
> 	  loaded_at 2018-05-15T04:48:48-0700  uid 0
> 	  xlated 200B  not jited  memlock 4096B  map_ids 8
>   9: kprobe  name probe_main_1  tag 0a87bdc2e2953b6d  gpl
> 	  loaded_at 2018-05-15T04:49:52-0700  uid 0
> 	  xlated 200B  not jited  memlock 4096B  map_ids 9
> 
>   $ ps ax | grep "python ./trace.py"
>   21711 pts/0    T      0:03 python ./trace.py __x64_sys_write
>   21765 pts/0    S+     0:00 python ./trace.py r::__x64_sys_nanosleep
>   21767 pts/2    S+     0:00 python ./trace.py t:syscalls:sys_enter_nanosleep
>   21800 pts/3    S+     0:00 python ./trace.py p:/home/yhs/a.out:main
>   22374 pts/1    S+     0:00 grep --color=auto python ./trace.py
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  tools/bpf/bpftool/main.c |   3 +-
>  tools/bpf/bpftool/main.h |   1 +
>  tools/bpf/bpftool/perf.c | 188 +++++++++++++++++++++++++++++++++++++++++++++++

Would you be able to also extend the Documentation/ and bash
completions?

>  3 files changed, 191 insertions(+), 1 deletion(-)
>  create mode 100644 tools/bpf/bpftool/perf.c
> 
> diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
> index 1ec852d..eea7f14 100644
> --- a/tools/bpf/bpftool/main.c
> +++ b/tools/bpf/bpftool/main.c
> @@ -87,7 +87,7 @@ static int do_help(int argc, char **argv)
>  		"       %s batch file FILE\n"
>  		"       %s version\n"
>  		"\n"
> -		"       OBJECT := { prog | map | cgroup }\n"
> +		"       OBJECT := { prog | map | cgroup | perf }\n"
>  		"       " HELP_SPEC_OPTIONS "\n"
>  		"",
>  		bin_name, bin_name, bin_name);
> @@ -216,6 +216,7 @@ static const struct cmd cmds[] = {
>  	{ "prog",	do_prog },
>  	{ "map",	do_map },
>  	{ "cgroup",	do_cgroup },
> +	{ "perf",	do_perf },
>  	{ "version",	do_version },
>  	{ 0 }
>  };
> diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
> index 6173cd9..63fdb31 100644
> --- a/tools/bpf/bpftool/main.h
> +++ b/tools/bpf/bpftool/main.h
> @@ -119,6 +119,7 @@ int do_prog(int argc, char **arg);
>  int do_map(int argc, char **arg);
>  int do_event_pipe(int argc, char **argv);
>  int do_cgroup(int argc, char **arg);
> +int do_perf(int argc, char **arg);
>  
>  int prog_parse_fd(int *argc, char ***argv);
>  int map_parse_fd_and_info(int *argc, char ***argv, void *info, __u32 *info_len);
> diff --git a/tools/bpf/bpftool/perf.c b/tools/bpf/bpftool/perf.c
> new file mode 100644
> index 0000000..6d676e4
> --- /dev/null
> +++ b/tools/bpf/bpftool/perf.c
> @@ -0,0 +1,188 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +// Copyright (C) 2018 Facebook
> +// Author: Yonghong Song <yhs@fb.com>
> +
> +#define _GNU_SOURCE
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <sys/stat.h>
> +#include <sys/types.h>
> +#include <unistd.h>
> +#include <ftw.h>
> +
> +#include <bpf.h>
> +
> +#include "main.h"
> +
> +static void print_perf_json(int pid, __u32 prog_id, __u32 prog_info,
> +			    char *buf, __u64 probe_offset, __u64 probe_addr)
> +{
> +	jsonw_start_object(json_wtr);
> +	jsonw_int_field(json_wtr, "pid", pid);
> +	jsonw_uint_field(json_wtr, "prog_id", prog_id);
> +	switch (prog_info) {
> +	case BPF_PERF_INFO_TP_NAME:
> +		jsonw_string_field(json_wtr, "prog_info", "tracepoint");
> +		jsonw_string_field(json_wtr, "tracepoint", buf);
> +		break;
> +	case BPF_PERF_INFO_KPROBE:
> +		jsonw_string_field(json_wtr, "prog_info", "kprobe");
> +		if (buf[0] != '\0') {
> +			jsonw_string_field(json_wtr, "func", buf);
> +			jsonw_lluint_field(json_wtr, "offset", probe_offset);
> +		} else {
> +			jsonw_lluint_field(json_wtr, "addr", probe_addr);
> +		}
> +		break;
> +	case BPF_PERF_INFO_KRETPROBE:
> +		jsonw_string_field(json_wtr, "prog_info", "kretprobe");
> +		if (buf[0] != '\0') {
> +			jsonw_string_field(json_wtr, "func", buf);
> +			jsonw_lluint_field(json_wtr, "offset", probe_offset);
> +		} else {
> +			jsonw_lluint_field(json_wtr, "addr", probe_addr);
> +		}
> +		break;
> +	case BPF_PERF_INFO_UPROBE:
> +		jsonw_string_field(json_wtr, "prog_info", "uprobe");
> +		jsonw_string_field(json_wtr, "filename", buf);
> +		jsonw_lluint_field(json_wtr, "offset", probe_offset);
> +		break;
> +	case BPF_PERF_INFO_URETPROBE:
> +		jsonw_string_field(json_wtr, "prog_info", "uretprobe");
> +		jsonw_string_field(json_wtr, "filename", buf);
> +		jsonw_lluint_field(json_wtr, "offset", probe_offset);
> +		break;
> +	}
> +	jsonw_end_object(json_wtr);
> +}
> +
> +static void print_perf_plain(int pid, __u32 prog_id, __u32 prog_info,
> +			    char *buf, __u64 probe_offset, __u64 probe_addr)
> +{
> +	printf("%d: prog_id %u ", pid, prog_id);

nit: for consistency with prog and map listings consider using double
spaces after prog_id (i.e. between fields).  Not a hard requirement,
though, perhaps I'm the only one who finds that more readable :)

> +	switch (prog_info) {
> +	case BPF_PERF_INFO_TP_NAME:
> +		printf("tracepoint %s\n", buf);
> +		break;
> +	case BPF_PERF_INFO_KPROBE:
> +		if (buf[0] != '\0')
> +			printf("kprobe func %s offset %llu\n", buf,
> +			       probe_offset);
> +		else
> +			printf("kprobe addr %llu\n", probe_addr);
> +		break;
> +	case BPF_PERF_INFO_KRETPROBE:
> +		if (buf[0] != '\0')
> +			printf("kretprobe func %s offset %llu\n", buf,
> +			       probe_offset);
> +		else
> +			printf("kretprobe addr %llu\n", probe_addr);
> +		break;
> +	case BPF_PERF_INFO_UPROBE:
> +		printf("uprobe filename %s offset %llu\n", buf, probe_offset);
> +		break;
> +	case BPF_PERF_INFO_URETPROBE:
> +		printf("uretprobe filename %s offset %llu\n", buf,
> +		       probe_offset);
> +		break;
> +	}
> +}
> +
> +static int show_proc(const char *fpath, const struct stat *sb,
> +		     int tflag, struct FTW *ftwbuf)
> +{
> +	__u64 probe_offset, probe_addr;
> +	__u32 prog_id, prog_info;
> +	int err, pid = 0, fd = 0;
> +	const char *pch;
> +	char buf[4096];
> +
> +	/* prefix always /proc */
> +	pch = fpath + 5;
> +	if (*pch == '\0')
> +		return 0;
> +
> +	/* pid should be all numbers */
> +	pch++;
> +	while (*pch >= '0' && *pch <= '9') {

nit: isdigit()?  strtoul() with its endptr also an option.  That said
     the code is actually quite readable as is, so I'm not sure if it's
     worth complicating it.

> +		pid = pid * 10 + *pch - '0';
> +		pch++;
> +	}
> +	if (*pch == '\0')
> +		return 0;
> +	if (*pch != '/')
> +		return FTW_SKIP_SUBTREE;
> +
> +	/* check /proc/<pid>/fd directory */
> +	pch++;
> +	if (*pch == '\0' || *pch != 'f')
> +		return FTW_SKIP_SUBTREE;

but == '\0' implies != 'f'

> +	pch++;
> +	if (*pch == '\0' || *pch != 'd')
> +		return FTW_SKIP_SUBTREE;

nit: possibly just:
     if (strncmp(pch, "fd", 2))
          return FTW_SKIP_SUBTREE;
     pch += 2;

> +	pch++;
> +	if (*pch == '\0')
> +		return 0;
> +	if (*pch != '/')
> +		return FTW_SKIP_SUBTREE;
> +
> +	/* check /proc/<pid>/fd/<fd_num> */
> +	pch++;
> +	while (*pch >= '0' && *pch <= '9') {
> +		fd = fd * 10 + *pch - '0';
> +		pch++;
> +	}
> +	if (*pch != '\0')
> +		return FTW_SKIP_SUBTREE;
> +
> +	/* query (pid, fd) for potential perf events */
> +	err = bpf_trace_event_query(pid, fd, buf, sizeof(buf),
> +		&prog_id, &prog_info, &probe_offset, &probe_addr);

nit: continuation line not aligned with opening bracket

> +	if (err < 0)
> +		return 0;
> +
> +	if (json_output)
> +		print_perf_json(pid, prog_id, prog_info, buf, probe_offset,
> +				probe_addr);
> +	else
> +		print_perf_plain(pid, prog_id, prog_info, buf, probe_offset,
> +				 probe_addr);
> +
> +	return 0;
> +}
> +
> +static int do_show(int argc, char **argv)
> +{
> +	int nopenfd = 16;
> +	int flags = FTW_ACTIONRETVAL | FTW_PHYS;

nit: reverse christmas tree networking style if you don't mind

> +	if (nftw("/proc", show_proc, nopenfd, flags) == -1) {
> +		perror("nftw");

nit: p_err("%s", strerror(errno)); would also show up in JSON output

> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +static int do_help(int argc, char **argv)
> +{
> +	fprintf(stderr,
> +		"Usage: %s %s { show | help }\n"
> +		"",
> +		bin_name, argv[-2]);
> +
> +	return 0;
> +}
> +
> +static const struct cmd cmds[] = {
> +	{ "show",	do_show },

Other commands alias show and list, so could you add:

	{ "list",	do_show },

and list to help output?

> +	{ "help",	do_help },
> +	{ 0 }
> +};
> +
> +int do_perf(int argc, char **argv)
> +{
> +	return cmd_select(cmds, argc, argv, do_help);
> +}

Thanks a lot for adding bpftool support, and with JSON output included!

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next 7/7] tools/bpftool: add perf subcommand
  2018-05-16  4:41   ` Jakub Kicinski
@ 2018-05-16  5:54     ` Yonghong Song
  0 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-16  5:54 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: peterz, ast, daniel, netdev, kernel-team, Quentin Monnet



On 5/15/18 9:41 PM, Jakub Kicinski wrote:
> On Tue, 15 May 2018 16:45:21 -0700, Yonghong Song wrote:
>> The new command "bpftool perf [show]" will traverse
>> all processes under /proc, and if any fd is associated
>> with a perf event, it will print out related perf event
>> information.
>>
>> Below is an example to show the results using bcc commands.
>> Running the following 4 bcc commands:
>>    kprobe:     trace.py '__x64_sys_nanosleep'
>>    kretprobe:  trace.py 'r::__x64_sys_nanosleep'
>>    tracepoint: trace.py 't:syscalls:sys_enter_nanosleep'
>>    uprobe:     trace.py 'p:/home/yhs/a.out:main'
>>
>> The bpftool command line and result:
>>
>>    $ bpftool perf
>>    21711: prog_id 5 kprobe func __x64_sys_write offset 0
>>    21765: prog_id 7 kretprobe func __x64_sys_nanosleep offset 0
>>    21767: prog_id 8 tracepoint sys_enter_nanosleep
>>    21800: prog_id 9 uprobe filename /home/yhs/a.out offset 1159
>>
>>    $ bpftool -j perf
>>    {"pid":21711,"prog_id":5,"prog_info":"kprobe","func":"__x64_sys_write","offset":0}, \
>>    {"pid":21765,"prog_id":7,"prog_info":"kretprobe","func":"__x64_sys_nanosleep","offset":0}, \
>>    {"pid":21767,"prog_id":8,"prog_info":"tracepoint","tracepoint":"sys_enter_nanosleep"}, \
>>    {"pid":21800,"prog_id":9,"prog_info":"uprobe","filename":"/home/yhs/a.out","offset":1159}
> 
> You need to wrap the objects inside an array, so
> 
> 	if (json_output)
> 		jsonw_start_array(json_wtr);
> 	nftw();
> 	if (json_output)
> 		jsonw_end_array(json_wtr);
> 
> otherwise output will not be a valid JSON.  To validate JSON try:
> 
> $ bpftool -j perf | python -m json.tool

Thanks for detailed review! All of your comments make sense.
I will address them in next revision after getting some feedback
for other patches.

> 
>>    $ bpftool prog
>>    5: kprobe  name probe___x64_sys  tag e495a0c82f2c7a8d  gpl
>> 	  loaded_at 2018-05-15T04:46:37-0700  uid 0
>> 	  xlated 200B  not jited  memlock 4096B  map_ids 4
>>    7: kprobe  name probe___x64_sys  tag f2fdee479a503abf  gpl
>> 	  loaded_at 2018-05-15T04:48:32-0700  uid 0
>> 	  xlated 200B  not jited  memlock 4096B  map_ids 7
>>    8: tracepoint  name tracepoint__sys  tag 5390badef2395fcf  gpl
>> 	  loaded_at 2018-05-15T04:48:48-0700  uid 0
>> 	  xlated 200B  not jited  memlock 4096B  map_ids 8
>>    9: kprobe  name probe_main_1  tag 0a87bdc2e2953b6d  gpl
>> 	  loaded_at 2018-05-15T04:49:52-0700  uid 0
>> 	  xlated 200B  not jited  memlock 4096B  map_ids 9
>>
>>    $ ps ax | grep "python ./trace.py"
>>    21711 pts/0    T      0:03 python ./trace.py __x64_sys_write
>>    21765 pts/0    S+     0:00 python ./trace.py r::__x64_sys_nanosleep
>>    21767 pts/2    S+     0:00 python ./trace.py t:syscalls:sys_enter_nanosleep
>>    21800 pts/3    S+     0:00 python ./trace.py p:/home/yhs/a.out:main
>>    22374 pts/1    S+     0:00 grep --color=auto python ./trace.py
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   tools/bpf/bpftool/main.c |   3 +-
>>   tools/bpf/bpftool/main.h |   1 +
>>   tools/bpf/bpftool/perf.c | 188 +++++++++++++++++++++++++++++++++++++++++++++++
> 
> Would you be able to also extend the Documentation/ and bash
> completions?
> 
>>   3 files changed, 191 insertions(+), 1 deletion(-)
>>   create mode 100644 tools/bpf/bpftool/perf.c
>>
>> diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
>> index 1ec852d..eea7f14 100644
>> --- a/tools/bpf/bpftool/main.c
>> +++ b/tools/bpf/bpftool/main.c
>> @@ -87,7 +87,7 @@ static int do_help(int argc, char **argv)
>>   		"       %s batch file FILE\n"
>>   		"       %s version\n"
>>   		"\n"
>> -		"       OBJECT := { prog | map | cgroup }\n"
>> +		"       OBJECT := { prog | map | cgroup | perf }\n"
>>   		"       " HELP_SPEC_OPTIONS "\n"
>>   		"",
>>   		bin_name, bin_name, bin_name);
>> @@ -216,6 +216,7 @@ static const struct cmd cmds[] = {
>>   	{ "prog",	do_prog },
>>   	{ "map",	do_map },
>>   	{ "cgroup",	do_cgroup },
>> +	{ "perf",	do_perf },
>>   	{ "version",	do_version },
>>   	{ 0 }
>>   };
>> diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
>> index 6173cd9..63fdb31 100644
>> --- a/tools/bpf/bpftool/main.h
>> +++ b/tools/bpf/bpftool/main.h
>> @@ -119,6 +119,7 @@ int do_prog(int argc, char **arg);
>>   int do_map(int argc, char **arg);
>>   int do_event_pipe(int argc, char **argv);
>>   int do_cgroup(int argc, char **arg);
>> +int do_perf(int argc, char **arg);
>>   
>>   int prog_parse_fd(int *argc, char ***argv);
>>   int map_parse_fd_and_info(int *argc, char ***argv, void *info, __u32 *info_len);
>> diff --git a/tools/bpf/bpftool/perf.c b/tools/bpf/bpftool/perf.c
>> new file mode 100644
>> index 0000000..6d676e4
>> --- /dev/null
>> +++ b/tools/bpf/bpftool/perf.c
>> @@ -0,0 +1,188 @@
>> +// SPDX-License-Identifier: GPL-2.0+
>> +// Copyright (C) 2018 Facebook
>> +// Author: Yonghong Song <yhs@fb.com>
>> +
>> +#define _GNU_SOURCE
>> +#include <fcntl.h>
>> +#include <stdlib.h>
>> +#include <string.h>
>> +#include <sys/stat.h>
>> +#include <sys/types.h>
>> +#include <unistd.h>
>> +#include <ftw.h>
>> +
>> +#include <bpf.h>
>> +
>> +#include "main.h"
>> +
>> +static void print_perf_json(int pid, __u32 prog_id, __u32 prog_info,
>> +			    char *buf, __u64 probe_offset, __u64 probe_addr)
>> +{
>> +	jsonw_start_object(json_wtr);
>> +	jsonw_int_field(json_wtr, "pid", pid);
>> +	jsonw_uint_field(json_wtr, "prog_id", prog_id);
>> +	switch (prog_info) {
>> +	case BPF_PERF_INFO_TP_NAME:
>> +		jsonw_string_field(json_wtr, "prog_info", "tracepoint");
>> +		jsonw_string_field(json_wtr, "tracepoint", buf);
>> +		break;
>> +	case BPF_PERF_INFO_KPROBE:
>> +		jsonw_string_field(json_wtr, "prog_info", "kprobe");
>> +		if (buf[0] != '\0') {
>> +			jsonw_string_field(json_wtr, "func", buf);
>> +			jsonw_lluint_field(json_wtr, "offset", probe_offset);
>> +		} else {
>> +			jsonw_lluint_field(json_wtr, "addr", probe_addr);
>> +		}
>> +		break;
>> +	case BPF_PERF_INFO_KRETPROBE:
>> +		jsonw_string_field(json_wtr, "prog_info", "kretprobe");
>> +		if (buf[0] != '\0') {
>> +			jsonw_string_field(json_wtr, "func", buf);
>> +			jsonw_lluint_field(json_wtr, "offset", probe_offset);
>> +		} else {
>> +			jsonw_lluint_field(json_wtr, "addr", probe_addr);
>> +		}
>> +		break;
>> +	case BPF_PERF_INFO_UPROBE:
>> +		jsonw_string_field(json_wtr, "prog_info", "uprobe");
>> +		jsonw_string_field(json_wtr, "filename", buf);
>> +		jsonw_lluint_field(json_wtr, "offset", probe_offset);
>> +		break;
>> +	case BPF_PERF_INFO_URETPROBE:
>> +		jsonw_string_field(json_wtr, "prog_info", "uretprobe");
>> +		jsonw_string_field(json_wtr, "filename", buf);
>> +		jsonw_lluint_field(json_wtr, "offset", probe_offset);
>> +		break;
>> +	}
>> +	jsonw_end_object(json_wtr);
>> +}
>> +
>> +static void print_perf_plain(int pid, __u32 prog_id, __u32 prog_info,
>> +			    char *buf, __u64 probe_offset, __u64 probe_addr)
>> +{
>> +	printf("%d: prog_id %u ", pid, prog_id);
> 
> nit: for consistency with prog and map listings consider using double
> spaces after prog_id (i.e. between fields).  Not a hard requirement,
> though, perhaps I'm the only one who finds that more readable :)
> 
>> +	switch (prog_info) {
>> +	case BPF_PERF_INFO_TP_NAME:
>> +		printf("tracepoint %s\n", buf);
>> +		break;
>> +	case BPF_PERF_INFO_KPROBE:
>> +		if (buf[0] != '\0')
>> +			printf("kprobe func %s offset %llu\n", buf,
>> +			       probe_offset);
>> +		else
>> +			printf("kprobe addr %llu\n", probe_addr);
>> +		break;
>> +	case BPF_PERF_INFO_KRETPROBE:
>> +		if (buf[0] != '\0')
>> +			printf("kretprobe func %s offset %llu\n", buf,
>> +			       probe_offset);
>> +		else
>> +			printf("kretprobe addr %llu\n", probe_addr);
>> +		break;
>> +	case BPF_PERF_INFO_UPROBE:
>> +		printf("uprobe filename %s offset %llu\n", buf, probe_offset);
>> +		break;
>> +	case BPF_PERF_INFO_URETPROBE:
>> +		printf("uretprobe filename %s offset %llu\n", buf,
>> +		       probe_offset);
>> +		break;
>> +	}
>> +}
>> +
>> +static int show_proc(const char *fpath, const struct stat *sb,
>> +		     int tflag, struct FTW *ftwbuf)
>> +{
>> +	__u64 probe_offset, probe_addr;
>> +	__u32 prog_id, prog_info;
>> +	int err, pid = 0, fd = 0;
>> +	const char *pch;
>> +	char buf[4096];
>> +
>> +	/* prefix always /proc */
>> +	pch = fpath + 5;
>> +	if (*pch == '\0')
>> +		return 0;
>> +
>> +	/* pid should be all numbers */
>> +	pch++;
>> +	while (*pch >= '0' && *pch <= '9') {
> 
> nit: isdigit()?  strtoul() with its endptr also an option.  That said
>       the code is actually quite readable as is, so I'm not sure if it's
>       worth complicating it.
> 
>> +		pid = pid * 10 + *pch - '0';
>> +		pch++;
>> +	}
>> +	if (*pch == '\0')
>> +		return 0;
>> +	if (*pch != '/')
>> +		return FTW_SKIP_SUBTREE;
>> +
>> +	/* check /proc/<pid>/fd directory */
>> +	pch++;
>> +	if (*pch == '\0' || *pch != 'f')
>> +		return FTW_SKIP_SUBTREE;
> 
> but == '\0' implies != 'f'
> 
>> +	pch++;
>> +	if (*pch == '\0' || *pch != 'd')
>> +		return FTW_SKIP_SUBTREE;
> 
> nit: possibly just:
>       if (strncmp(pch, "fd", 2))
>            return FTW_SKIP_SUBTREE;
>       pch += 2;
> 
>> +	pch++;
>> +	if (*pch == '\0')
>> +		return 0;
>> +	if (*pch != '/')
>> +		return FTW_SKIP_SUBTREE;
>> +
>> +	/* check /proc/<pid>/fd/<fd_num> */
>> +	pch++;
>> +	while (*pch >= '0' && *pch <= '9') {
>> +		fd = fd * 10 + *pch - '0';
>> +		pch++;
>> +	}
>> +	if (*pch != '\0')
>> +		return FTW_SKIP_SUBTREE;
>> +
>> +	/* query (pid, fd) for potential perf events */
>> +	err = bpf_trace_event_query(pid, fd, buf, sizeof(buf),
>> +		&prog_id, &prog_info, &probe_offset, &probe_addr);
> 
> nit: continuation line not aligned with opening bracket
> 
>> +	if (err < 0)
>> +		return 0;
>> +
>> +	if (json_output)
>> +		print_perf_json(pid, prog_id, prog_info, buf, probe_offset,
>> +				probe_addr);
>> +	else
>> +		print_perf_plain(pid, prog_id, prog_info, buf, probe_offset,
>> +				 probe_addr);
>> +
>> +	return 0;
>> +}
>> +
>> +static int do_show(int argc, char **argv)
>> +{
>> +	int nopenfd = 16;
>> +	int flags = FTW_ACTIONRETVAL | FTW_PHYS;
> 
> nit: reverse christmas tree networking style if you don't mind
> 
>> +	if (nftw("/proc", show_proc, nopenfd, flags) == -1) {
>> +		perror("nftw");
> 
> nit: p_err("%s", strerror(errno)); would also show up in JSON output
> 
>> +		return -1;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int do_help(int argc, char **argv)
>> +{
>> +	fprintf(stderr,
>> +		"Usage: %s %s { show | help }\n"
>> +		"",
>> +		bin_name, argv[-2]);
>> +
>> +	return 0;
>> +}
>> +
>> +static const struct cmd cmds[] = {
>> +	{ "show",	do_show },
> 
> Other commands alias show and list, so could you add:
> 
> 	{ "list",	do_show },
> 
> and list to help output?
> 
>> +	{ "help",	do_help },
>> +	{ 0 }
>> +};
>> +
>> +int do_perf(int argc, char **argv)
>> +{
>> +	return cmd_select(cmds, argc, argv, do_help);
>> +}
> 
> Thanks a lot for adding bpftool support, and with JSON output included!
> 


* Re: [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY
  2018-05-15 23:45 ` [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY Yonghong Song
@ 2018-05-16 11:27   ` Peter Zijlstra
  2018-05-16 21:59     ` Yonghong Song
  2018-05-17 23:52   ` kbuild test robot
  1 sibling, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2018-05-16 11:27 UTC (permalink / raw)
  To: Yonghong Song; +Cc: ast, daniel, netdev, kernel-team

On Tue, May 15, 2018 at 04:45:16PM -0700, Yonghong Song wrote:
> Currently, suppose a userspace application has loaded a bpf program
> and attached it to a tracepoint/kprobe/uprobe, and a bpf
> introspection tool, e.g., bpftool, wants to show which bpf program
> is attached to which tracepoint/kprobe/uprobe. Such attachment
> information will be really useful to understand the overall bpf
> deployment in the system.
> 
> There is a name field (16 bytes) for each program, which could
> be used to encode the attachment point. There are some drawbacks
> to this approach. First, a bpftool user (e.g., an admin) may not
> really understand the association between the name and the
> attachment point. Second, if one program is attached to multiple
> places, encoding a proper name which can imply all these
> attachments becomes difficult.
> 
> This patch introduces a new bpf subcommand BPF_PERF_EVENT_QUERY.
> Given a pid and fd, if the <pid, fd> is associated with a
> tracepoint/kprobe/uprobe perf event, BPF_PERF_EVENT_QUERY will return
>    . prog_id
>    . tracepoint name, or
>    . k[ret]probe funcname + offset or kernel addr, or
>    . u[ret]probe filename + offset
> to the userspace.
> The user can use "bpftool prog" to find more information about
> bpf program itself with prog_id.
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  include/linux/trace_events.h |  15 ++++++
>  include/uapi/linux/bpf.h     |  25 ++++++++++
>  kernel/bpf/syscall.c         | 113 +++++++++++++++++++++++++++++++++++++++++++
>  kernel/trace/bpf_trace.c     |  53 ++++++++++++++++++++
>  kernel/trace/trace_kprobe.c  |  29 +++++++++++
>  kernel/trace/trace_uprobe.c  |  22 +++++++++
>  6 files changed, 257 insertions(+)

Why is the command called *_PERF_EVENT_* ? Are there not a lot of !perf
places to attach BPF proglets?


* Re: [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY
  2018-05-16 11:27   ` Peter Zijlstra
@ 2018-05-16 21:59     ` Yonghong Song
  2018-05-17 15:32       ` Daniel Borkmann
  0 siblings, 1 reply; 15+ messages in thread
From: Yonghong Song @ 2018-05-16 21:59 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: ast, daniel, netdev, kernel-team



On 5/16/18 4:27 AM, Peter Zijlstra wrote:
> On Tue, May 15, 2018 at 04:45:16PM -0700, Yonghong Song wrote:
>> Currently, suppose a userspace application has loaded a bpf program
>> and attached it to a tracepoint/kprobe/uprobe, and a bpf
>> introspection tool, e.g., bpftool, wants to show which bpf program
>> is attached to which tracepoint/kprobe/uprobe. Such attachment
>> information will be really useful to understand the overall bpf
>> deployment in the system.
>>
>> There is a name field (16 bytes) for each program, which could
>> be used to encode the attachment point. There are some drawbacks
>> to this approach. First, a bpftool user (e.g., an admin) may not
>> really understand the association between the name and the
>> attachment point. Second, if one program is attached to multiple
>> places, encoding a proper name which can imply all these
>> attachments becomes difficult.
>>
>> This patch introduces a new bpf subcommand BPF_PERF_EVENT_QUERY.
>> Given a pid and fd, if the <pid, fd> is associated with a
>> tracepoint/kprobe/uprobe perf event, BPF_PERF_EVENT_QUERY will return
>>     . prog_id
>>     . tracepoint name, or
>>     . k[ret]probe funcname + offset or kernel addr, or
>>     . u[ret]probe filename + offset
>> to the userspace.
>> The user can use "bpftool prog" to find more information about
>> bpf program itself with prog_id.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   include/linux/trace_events.h |  15 ++++++
>>   include/uapi/linux/bpf.h     |  25 ++++++++++
>>   kernel/bpf/syscall.c         | 113 +++++++++++++++++++++++++++++++++++++++++++
>>   kernel/trace/bpf_trace.c     |  53 ++++++++++++++++++++
>>   kernel/trace/trace_kprobe.c  |  29 +++++++++++
>>   kernel/trace/trace_uprobe.c  |  22 +++++++++
>>   6 files changed, 257 insertions(+)
> 
> Why is the command called *_PERF_EVENT_* ? Are there not a lot of !perf
> places to attach BPF proglets?

Just to give a complete picture, below are the major places to attach
BPF programs:
    . perf based (through perf ioctl)
    . raw tracepoint based (through bpf interface)

    . netlink interface for tc, xdp, tunneling
    . setsockopt for socket filters
    . cgroup based (bpf attachment subcommand)
      mostly networking and io devices
    . some other networking socket related (sk_skb stream/parser/verdict,
      sk_msg verdict) through bpf attachment subcommand.

Currently, for cgroup based attachment, we have BPF_PROG_QUERY, which
takes a cgroup file descriptor as input. For other networking based queries, we
may need to enumerate tc filters, networking devices, open sockets, etc.
to get the attachment information.

So a single BPF_QUERY subcommand may be too complex to
cover all cases.

But you are right that the BPF_PERF_EVENT_QUERY name is too narrow, since
it should be used for other (pid, fd) based queries as well (e.g., 
socket, or other potential uses in the future).

How about the subcommand name BPF_TASK_FD_QUERY and make 
bpf_attr.task_fd_query extensible?

Thanks!


* Re: [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY
  2018-05-16 21:59     ` Yonghong Song
@ 2018-05-17 15:32       ` Daniel Borkmann
  2018-05-17 17:50         ` Yonghong Song
  0 siblings, 1 reply; 15+ messages in thread
From: Daniel Borkmann @ 2018-05-17 15:32 UTC (permalink / raw)
  To: Yonghong Song, Peter Zijlstra; +Cc: ast, netdev, kernel-team

On 05/16/2018 11:59 PM, Yonghong Song wrote:
> On 5/16/18 4:27 AM, Peter Zijlstra wrote:
>> On Tue, May 15, 2018 at 04:45:16PM -0700, Yonghong Song wrote:
>>> Currently, suppose a userspace application has loaded a bpf program
>>> and attached it to a tracepoint/kprobe/uprobe, and a bpf
>>> introspection tool, e.g., bpftool, wants to show which bpf program
>>> is attached to which tracepoint/kprobe/uprobe. Such attachment
>>> information will be really useful to understand the overall bpf
>>> deployment in the system.
>>>
>>> There is a name field (16 bytes) for each program, which could
>>> be used to encode the attachment point. There are some drawbacks
>>> to this approach. First, a bpftool user (e.g., an admin) may not
>>> really understand the association between the name and the
>>> attachment point. Second, if one program is attached to multiple
>>> places, encoding a proper name which can imply all these
>>> attachments becomes difficult.
>>>
>>> This patch introduces a new bpf subcommand BPF_PERF_EVENT_QUERY.
>>> Given a pid and fd, if the <pid, fd> is associated with a
>>> tracepoint/kprobe/uprobe perf event, BPF_PERF_EVENT_QUERY will return
>>>     . prog_id
>>>     . tracepoint name, or
>>>     . k[ret]probe funcname + offset or kernel addr, or
>>>     . u[ret]probe filename + offset
>>> to the userspace.
>>> The user can use "bpftool prog" to find more information about
>>> bpf program itself with prog_id.
>>>
>>> Signed-off-by: Yonghong Song <yhs@fb.com>
>>> ---
>>>   include/linux/trace_events.h |  15 ++++++
>>>   include/uapi/linux/bpf.h     |  25 ++++++++++
>>>   kernel/bpf/syscall.c         | 113 +++++++++++++++++++++++++++++++++++++++++++
>>>   kernel/trace/bpf_trace.c     |  53 ++++++++++++++++++++
>>>   kernel/trace/trace_kprobe.c  |  29 +++++++++++
>>>   kernel/trace/trace_uprobe.c  |  22 +++++++++
>>>   6 files changed, 257 insertions(+)
>>
>> Why is the command called *_PERF_EVENT_* ? Are there not a lot of !perf
>> places to attach BPF proglets?
> 
> Just to give a complete picture, below are the major places to attach
> BPF programs:
>    . perf based (through perf ioctl)
>    . raw tracepoint based (through bpf interface)
> 
>    . netlink interface for tc, xdp, tunneling
>    . setsockopt for socket filters
>    . cgroup based (bpf attachment subcommand)
>      mostly networking and io devices
>    . some other networking socket related (sk_skb stream/parser/verdict,
>      sk_msg verdict) through bpf attachment subcommand.
> 
> Currently, for cgroup based attachment, we have BPF_PROG_QUERY with input cgroup file descriptor. For other networking based queries, we
> may need to enumerate tc filters, networking devices, open sockets, etc.
> to get the attachment information.
> 
> So to have one BPF_QUERY command line may be too complex to
> cover all cases.
> 
> But you are right that BPF_PERF_EVENT_QUERY name is too narrow since
> it should be used for other (pid, fd) based queries as well (e.g., socket, or other potential uses in the future).
> 
> How about the subcommand name BPF_TASK_FD_QUERY and make bpf_attr.task_fd_query extensible?

I like the introspection output it provides in 7/7, it's really great!
So the query interface would only ever be tied to BPF progs whose attach
> lifetime is tied to the lifetime of the application, and as soon as all
> refs on the fd are released it is unloaded from the system. BPF_TASK_FD_QUERY
seems okay to me, or something like BPF_ATTACH_QUERY. Even if the name is
slightly more generic, it might be more fitting with other cmds like
BPF_PROG_QUERY we have where we tell an attach point to retrieve all progs
from it (though only tied to cgroups right now, it may not be in future).

For all the others that are not strictly tied to the task but global, bpftool
would then need to be extended to query the various other interfaces like
> netlink for retrieval, which is on the todo list for the future as well. So
this set nicely complements this introspection aspect.

Thanks,
Daniel


* Re: [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY
  2018-05-17 15:32       ` Daniel Borkmann
@ 2018-05-17 17:50         ` Yonghong Song
  0 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2018-05-17 17:50 UTC (permalink / raw)
  To: Daniel Borkmann, Peter Zijlstra; +Cc: ast, netdev, kernel-team



On 5/17/18 8:32 AM, Daniel Borkmann wrote:
> On 05/16/2018 11:59 PM, Yonghong Song wrote:
>> On 5/16/18 4:27 AM, Peter Zijlstra wrote:
>>> On Tue, May 15, 2018 at 04:45:16PM -0700, Yonghong Song wrote:
>>>> Currently, suppose a userspace application has loaded a bpf program
>>>> and attached it to a tracepoint/kprobe/uprobe, and a bpf
>>>> introspection tool, e.g., bpftool, wants to show which bpf program
>>>> is attached to which tracepoint/kprobe/uprobe. Such attachment
>>>> information will be really useful to understand the overall bpf
>>>> deployment in the system.
>>>>
>>>> There is a name field (16 bytes) for each program, which could
>>>> be used to encode the attachment point. There are some drawbacks
>>>> to this approach. First, a bpftool user (e.g., an admin) may not
>>>> really understand the association between the name and the
>>>> attachment point. Second, if one program is attached to multiple
>>>> places, encoding a proper name which can imply all these
>>>> attachments becomes difficult.
>>>>
>>>> This patch introduces a new bpf subcommand BPF_PERF_EVENT_QUERY.
>>>> Given a pid and fd, if the <pid, fd> is associated with a
>>>> tracepoint/kprobe/uprobe perf event, BPF_PERF_EVENT_QUERY will return
>>>>      . prog_id
>>>>      . tracepoint name, or
>>>>      . k[ret]probe funcname + offset or kernel addr, or
>>>>      . u[ret]probe filename + offset
>>>> to the userspace.
>>>> The user can use "bpftool prog" to find more information about
>>>> bpf program itself with prog_id.
>>>>
>>>> Signed-off-by: Yonghong Song <yhs@fb.com>
>>>> ---
>>>>    include/linux/trace_events.h |  15 ++++++
>>>>    include/uapi/linux/bpf.h     |  25 ++++++++++
>>>>    kernel/bpf/syscall.c         | 113 +++++++++++++++++++++++++++++++++++++++++++
>>>>    kernel/trace/bpf_trace.c     |  53 ++++++++++++++++++++
>>>>    kernel/trace/trace_kprobe.c  |  29 +++++++++++
>>>>    kernel/trace/trace_uprobe.c  |  22 +++++++++
>>>>    6 files changed, 257 insertions(+)
>>>
>>> Why is the command called *_PERF_EVENT_* ? Are there not a lot of !perf
>>> places to attach BPF proglets?
>>
>> Just to give a complete picture, below are the major places to attach
>> BPF programs:
>>     . perf based (through perf ioctl)
>>     . raw tracepoint based (through bpf interface)
>>
>>     . netlink interface for tc, xdp, tunneling
>>     . setsockopt for socket filters
>>     . cgroup based (bpf attachment subcommand)
>>       mostly networking and io devices
>>     . some other networking socket related (sk_skb stream/parser/verdict,
>>       sk_msg verdict) through bpf attachment subcommand.
>>
>> Currently, for cgroup based attachment, we have BPF_PROG_QUERY with input cgroup file descriptor. For other networking based queries, we
>> may need to enumerate tc filters, networking devices, open sockets, etc.
>> to get the attachment information.
>>
>> So to have one BPF_QUERY command line may be too complex to
>> cover all cases.
>>
>> But you are right that BPF_PERF_EVENT_QUERY name is too narrow since
>> it should be used for other (pid, fd) based queries as well (e.g., socket, or other potential uses in the future).
>>
>> How about the subcommand name BPF_TASK_FD_QUERY and make bpf_attr.task_fd_query extensible?
> 
> I like the introspection output it provides in 7/7, it's really great!
> So the query interface would only ever be tied to BPF progs whose attach
> life time is tied to the life time of the application and as soon as all
> refs on the fd are released it's unloaded from the system. BPF_TASK_FD_QUERY
> seems okay to me, or something like BPF_ATTACH_QUERY. Even if the name is
> slightly more generic, it might be more fitting with other cmds like
> BPF_PROG_QUERY we have where we tell an attach point to retrieve all progs
> from it (though only tied to cgroups right now, it may not be in future).

I think BPF_TASK_FD_QUERY is okay. Using BPF_ATTACH_QUERY indeed seems
a little too broad to me, since other query subcommands could also
query attachments, just with different inputs.

BPF_PROG_QUERY also queries attachments. Currently, given a
cgroup fd, it queries the prog array attached to it. Sean has a patch to
attach bpf programs to an RC device, and given a device fd, it will
query the prog array attached to that device.

> 
> For all the others that are not strictly tied to the task but global, bpftool
> would then need to be extended to query the various other interfaces like
> netlink for retrieval which is on todo for some point in future as well. So
> this set nicely complements this introspection aspect.

Totally agree.
Thanks!

> 
> Thanks,
> Daniel
> 


* Re: [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY
  2018-05-15 23:45 ` [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY Yonghong Song
  2018-05-16 11:27   ` Peter Zijlstra
@ 2018-05-17 23:52   ` kbuild test robot
  1 sibling, 0 replies; 15+ messages in thread
From: kbuild test robot @ 2018-05-17 23:52 UTC (permalink / raw)
  To: Yonghong Song; +Cc: kbuild-all, peterz, ast, daniel, netdev, kernel-team


Hi Yonghong,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/0day-ci/linux/commits/Yonghong-Song/bpf-implement-BPF_PERF_EVENT_QUERY-for-perf-event-query/20180518-060508
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: i386-randconfig-x000-201819 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All warnings (new ones prefixed by >>):

   kernel/trace/trace_kprobe.c: In function 'bpf_get_kprobe_info':
>> kernel/trace/trace_kprobe.c:1315:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
      *probe_addr = (u64)tk->rp.kp.addr;
                    ^

vim +1315 kernel/trace/trace_kprobe.c

  1290	
  1291	int bpf_get_kprobe_info(struct perf_event *event, u32 *prog_info,
  1292				const char **symbol, u64 *probe_offset,
  1293				u64 *probe_addr, bool perf_type_tracepoint)
  1294	{
  1295		const char *pevent = trace_event_name(event->tp_event);
  1296		const char *group = event->tp_event->class->system;
  1297		struct trace_kprobe *tk;
  1298	
  1299		if (perf_type_tracepoint)
  1300			tk = find_trace_kprobe(pevent, group);
  1301		else
  1302			tk = event->tp_event->data;
  1303		if (!tk)
  1304			return -EINVAL;
  1305	
  1306		*prog_info = trace_kprobe_is_return(tk) ? BPF_PERF_INFO_KRETPROBE
  1307							: BPF_PERF_INFO_KPROBE;
  1308		if (tk->symbol) {
  1309			*symbol = tk->symbol;
  1310			*probe_offset = tk->rp.kp.offset;
  1311			*probe_addr = 0;
  1312		} else {
  1313			*symbol = NULL;
  1314			*probe_offset = 0;
> 1315			*probe_addr = (u64)tk->rp.kp.addr;
  1316		}
  1317		return 0;
  1318	}
  1319	#endif	/* CONFIG_PERF_EVENTS */
  1320	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation



Thread overview: 15+ messages
2018-05-15 23:45 [PATCH bpf-next 0/7] bpf: implement BPF_PERF_EVENT_QUERY for perf event query Yonghong Song
2018-05-15 23:45 ` [PATCH bpf-next 1/7] perf/core: add perf_get_event() to return perf_event given a struct file Yonghong Song
2018-05-15 23:45 ` [PATCH bpf-next 2/7] bpf: introduce bpf subcommand BPF_PERF_EVENT_QUERY Yonghong Song
2018-05-16 11:27   ` Peter Zijlstra
2018-05-16 21:59     ` Yonghong Song
2018-05-17 15:32       ` Daniel Borkmann
2018-05-17 17:50         ` Yonghong Song
2018-05-17 23:52   ` kbuild test robot
2018-05-15 23:45 ` [PATCH bpf-next 3/7] tools/bpf: sync kernel header bpf.h and add bpf_trace_event_query in libbpf Yonghong Song
2018-05-15 23:45 ` [PATCH bpf-next 4/7] tools/bpf: add ksym_get_addr() in trace_helpers Yonghong Song
2018-05-15 23:45 ` [PATCH bpf-next 5/7] samples/bpf: add a samples/bpf test for BPF_PERF_EVENT_QUERY Yonghong Song
2018-05-15 23:45 ` [PATCH bpf-next 6/7] tools/bpf: add two BPF_PERF_EVENT_QUERY tests in test_progs Yonghong Song
2018-05-15 23:45 ` [PATCH bpf-next 7/7] tools/bpftool: add perf subcommand Yonghong Song
2018-05-16  4:41   ` Jakub Kicinski
2018-05-16  5:54     ` Yonghong Song
