* [RFC 0/6] enable creating [k,u]probe with perf_event_open
@ 2017-11-08 18:17 Song Liu
  2017-11-08 18:17 ` [RFC] bcc: Try use new API to create " Song Liu
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

This is a follow-up to the discussion on a "new kprobe api" at Linux
Plumbers 2017:

https://www.linuxplumbersconf.org/2017/ocw/proposals/4808

With the current kernel, user space tools can only create and destroy
[k,u]probes through a text-based API (kprobe_events and uprobe_events in
tracefs). This approach relies on user space to clean up the [k,u]probes
after using them, which is hard to do reliably (for example, when the
tool crashes or is killed before it can remove its probes).

To solve this problem, we introduce a file-descriptor-based API.
Specifically, we extend perf_event_open() to create a [k,u]probe and
attach it to the file descriptor returned by perf_event_open(). These
[k,u]probes are associated with the file descriptor, so they are not
available in tracefs and are cleaned up when the file descriptor is
destroyed.

We reuse a large portion of the existing trace_kprobe and trace_uprobe
code. Currently, the file descriptor API does not support probe arguments
the way the text-based API does. This should not be a problem, as users of
the file-descriptor-based API read data through other methods (bpf, etc.).
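
For a quick idea of the intended usage, here is a minimal sketch that
creates a kprobe on a kernel function by name (error handling omitted; it
assumes the PERF_TYPE_PROBE definitions from the first patch are available
in linux/perf_event.h, and the helper name is only for illustration):

  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>

  static int open_kprobe_fd(const char *func)
  {
          struct probe_desc pd = {};
          struct perf_event_attr attr = {};

          pd.func = (__u64)(unsigned long)func;  /* kernel function name */
          pd.offset = 0;

          attr.type = PERF_TYPE_PROBE;
          attr.probe_desc = (__u64)(unsigned long)&pd;
          attr.is_uprobe = 0;                    /* kprobe, not uprobe */
          attr.is_return = 0;                    /* set to 1 for kretprobe */
          attr.sample_period = 1;

          /* the kprobe lives only as long as the returned fd */
          return syscall(__NR_perf_event_open, &attr, -1 /* pid */,
                         0 /* cpu */, -1 /* group_fd */,
                         PERF_FLAG_FD_CLOEXEC);
  }

A bpf program can then be attached with ioctl(fd, PERF_EVENT_IOC_SET_BPF,
prog_fd) and the event enabled with ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
closing the file descriptor removes the kprobe, with nothing left behind
in tracefs.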

I also include a patch to bcc, and a patch to the perf_event_open.2 man
page. Please see the list below. A fork of bcc with this patch is also
available on GitHub:

  https://github.com/liu-song-6/bcc/tree/new_perf_event_opn

Thanks,
Song

man-pages patch:
  perf_event_open.2: add new type PERF_TYPE_PROBE

bcc patch:
  bcc: Try use new API to create [k,u]probe with perf_event_open

kernel patches:

Song Liu (6):
  perf: Add new type PERF_TYPE_PROBE
  perf: copy new perf_event.h to tools/include/uapi
  perf: implement kprobe support to PERF_TYPE_PROBE
  perf: implement uprobe support to PERF_TYPE_PROBE
  bpf: add option for bpf_load.c to use PERF_TYPE_PROBE
  bpf: add new test test_many_kprobe

 include/linux/trace_events.h          |   2 +
 include/uapi/linux/perf_event.h       |  35 ++++++-
 kernel/events/core.c                  |  39 ++++++-
 kernel/trace/trace_event_perf.c       |  91 +++++++++++++++++
 kernel/trace/trace_kprobe.c           |  91 +++++++++++++++--
 kernel/trace/trace_probe.h            |  11 ++
 kernel/trace/trace_uprobe.c           |  90 +++++++++++++++--
 samples/bpf/Makefile                  |   3 +
 samples/bpf/bpf_load.c                |  61 ++++++-----
 samples/bpf/bpf_load.h                |  12 +++
 samples/bpf/test_many_kprobe_user.c   | 184 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/perf_event.h |  35 ++++++-
 12 files changed, 607 insertions(+), 47 deletions(-)
 create mode 100644 samples/bpf/test_many_kprobe_user.c

* [RFC] bcc: Try use new API to create [k,u]probe with perf_event_open
  2017-11-08 18:17 [RFC 0/6] enable creating [k,u]probe with perf_event_open Song Liu
@ 2017-11-08 18:17 ` Song Liu
  2017-11-08 18:17 ` [RFC 1/6] perf: Add new type PERF_TYPE_PROBE Song Liu
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

The new kernel API allows creating [k,u]probes with perf_event_open().
This patch tries to use the new API; if the new API doesn't work, we
fall back to the old text-based API.

bpf_detach_probe() now looks up the event being removed in
[k,u]probe_events. If the event is not found there, it was not created
through the text-based API, so we skip the cleanup procedure.

Signed-off-by: Song Liu <songliubraving@fb.com>
---
 src/cc/libbpf.c | 224 +++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 155 insertions(+), 69 deletions(-)

diff --git a/src/cc/libbpf.c b/src/cc/libbpf.c
index 77413df..d7be0a9 100644
--- a/src/cc/libbpf.c
+++ b/src/cc/libbpf.c
@@ -520,38 +520,66 @@ int bpf_attach_socket(int sock, int prog) {
   return setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF, &prog, sizeof(prog));
 }
 
+/*
+ * The new kernel API allows creating a [k,u]probe with perf_event_open(),
+ * which makes it easier to clean up the [k,u]probe. This function tries
+ * to create pfd with the new API.
+ */
+static int bpf_try_perf_event_open_with_probe(struct probe_desc *pd, int pid,
+    int cpu, int group_fd, int is_uprobe, int is_return)
+{
+  struct perf_event_attr attr = {};
+
+  attr.type = PERF_TYPE_PROBE;
+  attr.probe_desc = ptr_to_u64(pd);
+  attr.sample_type = PERF_SAMPLE_RAW | PERF_SAMPLE_CALLCHAIN;
+  attr.sample_period = 1;
+  attr.wakeup_events = 1;
+  attr.is_uprobe = is_uprobe;
+  attr.is_return = is_return;
+  return syscall(__NR_perf_event_open, &attr, pid, cpu, group_fd,
+                 PERF_FLAG_FD_CLOEXEC);
+}
+
 static int bpf_attach_tracing_event(int progfd, const char *event_path,
-    struct perf_reader *reader, int pid, int cpu, int group_fd) {
-  int efd, pfd;
+    struct perf_reader *reader, int pid, int cpu, int group_fd, int pfd) {
+  int efd;
   ssize_t bytes;
   char buf[256];
   struct perf_event_attr attr = {};
 
-  snprintf(buf, sizeof(buf), "%s/id", event_path);
-  efd = open(buf, O_RDONLY, 0);
-  if (efd < 0) {
-    fprintf(stderr, "open(%s): %s\n", buf, strerror(errno));
-    return -1;
-  }
+  /*
+   * Only look up id and call perf_event_open when
+   * bpf_try_perf_event_open_with_probe() didn't return a valid pfd.
+   */
+  if (pfd < 0) {
+    snprintf(buf, sizeof(buf), "%s/id", event_path);
+    efd = open(buf, O_RDONLY, 0);
+    if (efd < 0) {
+      fprintf(stderr, "open(%s): %s\n", buf, strerror(errno));
+      return -1;
+    }
 
-  bytes = read(efd, buf, sizeof(buf));
-  if (bytes <= 0 || bytes >= sizeof(buf)) {
-    fprintf(stderr, "read(%s): %s\n", buf, strerror(errno));
+    bytes = read(efd, buf, sizeof(buf));
+    if (bytes <= 0 || bytes >= sizeof(buf)) {
+      fprintf(stderr, "read(%s): %s\n", buf, strerror(errno));
+      close(efd);
+      return -1;
+    }
     close(efd);
-    return -1;
-  }
-  close(efd);
-  buf[bytes] = '\0';
-  attr.config = strtol(buf, NULL, 0);
-  attr.type = PERF_TYPE_TRACEPOINT;
-  attr.sample_type = PERF_SAMPLE_RAW | PERF_SAMPLE_CALLCHAIN;
-  attr.sample_period = 1;
-  attr.wakeup_events = 1;
-  pfd = syscall(__NR_perf_event_open, &attr, pid, cpu, group_fd, PERF_FLAG_FD_CLOEXEC);
-  if (pfd < 0) {
-    fprintf(stderr, "perf_event_open(%s/id): %s\n", event_path, strerror(errno));
-    return -1;
+    buf[bytes] = '\0';
+    attr.config = strtol(buf, NULL, 0);
+    attr.type = PERF_TYPE_TRACEPOINT;
+    attr.sample_type = PERF_SAMPLE_RAW | PERF_SAMPLE_CALLCHAIN;
+    attr.sample_period = 1;
+    attr.wakeup_events = 1;
+    pfd = syscall(__NR_perf_event_open, &attr, pid, cpu, group_fd, PERF_FLAG_FD_CLOEXEC);
+    if (pfd < 0) {
+      fprintf(stderr, "perf_event_open(%s/id): %s\n", event_path, strerror(errno));
+      return -1;
+    }
   }
+
   perf_reader_set_fd(reader, pfd);
 
   if (perf_reader_mmap(reader, attr.type, attr.sample_type) < 0)
@@ -579,31 +607,41 @@ void * bpf_attach_kprobe(int progfd, enum bpf_probe_attach_type attach_type, con
   char event_alias[128];
   struct perf_reader *reader = NULL;
   static char *event_type = "kprobe";
+  struct probe_desc pd;
+  int pfd;
 
   reader = perf_reader_new(cb, NULL, NULL, cb_cookie, probe_perf_reader_page_cnt);
   if (!reader)
     goto error;
 
-  snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/%s_events", event_type);
-  kfd = open(buf, O_WRONLY | O_APPEND, 0);
-  if (kfd < 0) {
-    fprintf(stderr, "open(%s): %s\n", buf, strerror(errno));
-    goto error;
-  }
+  /* try to use the new API to create the kprobe */
+  pd.func = ptr_to_u64((void *)fn_name);
+  pd.offset = 0;
+  pfd = bpf_try_perf_event_open_with_probe(&pd, pid, cpu, group_fd, 0,
+                                           attach_type != BPF_PROBE_ENTRY);
 
-  snprintf(event_alias, sizeof(event_alias), "%s_bcc_%d", ev_name, getpid());
-  snprintf(buf, sizeof(buf), "%c:%ss/%s %s", attach_type==BPF_PROBE_ENTRY ? 'p' : 'r',
-			event_type, event_alias, fn_name);
-  if (write(kfd, buf, strlen(buf)) < 0) {
-    if (errno == EINVAL)
-      fprintf(stderr, "check dmesg output for possible cause\n");
+  if (pfd < 0) {
+    snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/%s_events", event_type);
+    kfd = open(buf, O_WRONLY | O_APPEND, 0);
+    if (kfd < 0) {
+      fprintf(stderr, "open(%s): %s\n", buf, strerror(errno));
+      goto error;
+    }
+
+    snprintf(event_alias, sizeof(event_alias), "%s_bcc_%d", ev_name, getpid());
+    snprintf(buf, sizeof(buf), "%c:%ss/%s %s", attach_type==BPF_PROBE_ENTRY ? 'p' : 'r',
+             event_type, event_alias, fn_name);
+    if (write(kfd, buf, strlen(buf)) < 0) {
+      if (errno == EINVAL)
+        fprintf(stderr, "check dmesg output for possible cause\n");
+      close(kfd);
+      goto error;
+    }
     close(kfd);
-    goto error;
+    snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/events/%ss/%s", event_type, event_alias);
   }
-  close(kfd);
 
-  snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/events/%ss/%s", event_type, event_alias);
-  if (bpf_attach_tracing_event(progfd, buf, reader, pid, cpu, group_fd) < 0)
+  if (bpf_attach_tracing_event(progfd, buf, reader, pid, cpu, group_fd, pfd) < 0)
     goto error;
 
   return reader;
@@ -685,42 +723,53 @@ void * bpf_attach_uprobe(int progfd, enum bpf_probe_attach_type attach_type, con
   struct perf_reader *reader = NULL;
   static char *event_type = "uprobe";
   int res, kfd = -1, ns_fd = -1;
+  struct probe_desc pd;
+  int pfd;
 
   reader = perf_reader_new(cb, NULL, NULL, cb_cookie, probe_perf_reader_page_cnt);
   if (!reader)
     goto error;
 
-  snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/%s_events", event_type);
-  kfd = open(buf, O_WRONLY | O_APPEND, 0);
-  if (kfd < 0) {
-    fprintf(stderr, "open(%s): %s\n", buf, strerror(errno));
-    goto error;
-  }
+  /* try to use the new API to create the uprobe */
+  pd.path = ptr_to_u64((void *)binary_path);
+  pd.offset = offset;
+  pfd = bpf_try_perf_event_open_with_probe(&pd, pid, cpu, group_fd, 1,
+                                           attach_type != BPF_PROBE_ENTRY);
 
-  res = snprintf(event_alias, sizeof(event_alias), "%s_bcc_%d", ev_name, getpid());
-  if (res < 0 || res >= sizeof(event_alias)) {
-    fprintf(stderr, "Event name (%s) is too long for buffer\n", ev_name);
-    goto error;
-  }
-  res = snprintf(buf, sizeof(buf), "%c:%ss/%s %s:0x%lx", attach_type==BPF_PROBE_ENTRY ? 'p' : 'r',
-			event_type, event_alias, binary_path, offset);
-  if (res < 0 || res >= sizeof(buf)) {
-    fprintf(stderr, "Event alias (%s) too long for buffer\n", event_alias);
-    goto error;
-  }
+  if (pfd < 0) {
+    snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/%s_events", event_type);
+    kfd = open(buf, O_WRONLY | O_APPEND, 0);
+    if (kfd < 0) {
+      fprintf(stderr, "open(%s): %s\n", buf, strerror(errno));
+      goto error;
+    }
 
-  ns_fd = enter_mount_ns(pid);
-  if (write(kfd, buf, strlen(buf)) < 0) {
-    if (errno == EINVAL)
-      fprintf(stderr, "check dmesg output for possible cause\n");
-    goto error;
+    res = snprintf(event_alias, sizeof(event_alias), "%s_bcc_%d", ev_name, getpid());
+    if (res < 0 || res >= sizeof(event_alias)) {
+      fprintf(stderr, "Event name (%s) is too long for buffer\n", ev_name);
+      goto error;
+    }
+    res = snprintf(buf, sizeof(buf), "%c:%ss/%s %s:0x%lx", attach_type==BPF_PROBE_ENTRY ? 'p' : 'r',
+                   event_type, event_alias, binary_path, offset);
+    if (res < 0 || res >= sizeof(buf)) {
+      fprintf(stderr, "Event alias (%s) too long for buffer\n", event_alias);
+      goto error;
+    }
+
+    ns_fd = enter_mount_ns(pid);
+    if (write(kfd, buf, strlen(buf)) < 0) {
+      if (errno == EINVAL)
+        fprintf(stderr, "check dmesg output for possible cause\n");
+      goto error;
+    }
+    close(kfd);
+    exit_mount_ns(ns_fd);
+    ns_fd = -1;
+
+    snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/events/%ss/%s", event_type, event_alias);
   }
-  close(kfd);
-  exit_mount_ns(ns_fd);
-  ns_fd = -1;
 
-  snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/events/%ss/%s", event_type, event_alias);
-  if (bpf_attach_tracing_event(progfd, buf, reader, pid, cpu, group_fd) < 0)
+  if (bpf_attach_tracing_event(progfd, buf, reader, pid, cpu, group_fd, pfd) < 0)
     goto error;
 
   return reader;
@@ -735,8 +784,43 @@ error:
 
 static int bpf_detach_probe(const char *ev_name, const char *event_type)
 {
-  int kfd, res;
+  int kfd = -1, res;
   char buf[PATH_MAX];
+  int found_event = 0;
+  size_t bufsize = 0;
+  char *cptr = NULL;
+  FILE *fp;
+
+  /*
+   * For a [k,u]probe created with perf_event_open() (on newer kernels),
+   * it is not necessary to clean it up in [k,u]probe_events. We first look
+   * up the %s_bcc_%d line in [k,u]probe_events. If the event is not found,
+   * it is safe to skip the cleanup step (writing -:... to the file).
+   */
+  snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/%s_events", event_type);
+  fp = fopen(buf, "r");
+  if (!fp) {
+    fprintf(stderr, "open(%s): %s\n", buf, strerror(errno));
+    goto error;
+  }
+
+  res = snprintf(buf, sizeof(buf), "%ss/%s_bcc_%d", event_type, ev_name, getpid());
+  if (res < 0 || res >= sizeof(buf)) {
+    fprintf(stderr, "snprintf(%s): %d\n", ev_name, res);
+    goto error;
+  }
+
+  while (getline(&cptr, &bufsize, fp) != -1)
+    if (strstr(cptr, buf) != NULL) {
+      found_event = 1;
+      break;
+    }
+  fclose(fp);
+  fp = NULL;
+
+  if (!found_event)
+    return 0;
+
   snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/%s_events", event_type);
   kfd = open(buf, O_WRONLY | O_APPEND, 0);
   if (kfd < 0) {
@@ -760,6 +844,8 @@ static int bpf_detach_probe(const char *ev_name, const char *event_type)
 error:
   if (kfd >= 0)
     close(kfd);
+  if (fp)
+    fclose(fp);
   return -1;
 }
 
@@ -786,7 +872,7 @@ void * bpf_attach_tracepoint(int progfd, const char *tp_category,
 
   snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/events/%s/%s",
            tp_category, tp_name);
-  if (bpf_attach_tracing_event(progfd, buf, reader, pid, cpu, group_fd) < 0)
+  if (bpf_attach_tracing_event(progfd, buf, reader, pid, cpu, group_fd, -1) < 0)
     goto error;
 
   return reader;
-- 
2.9.5

* [RFC 1/6] perf: Add new type PERF_TYPE_PROBE
  2017-11-08 18:17 [RFC 0/6] enable creating [k,u]probe with perf_event_open Song Liu
  2017-11-08 18:17 ` [RFC] bcc: Try use new API to create " Song Liu
@ 2017-11-08 18:17 ` Song Liu
  2017-11-08 18:17 ` [RFC] perf_event_open.2: add " Song Liu
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

A new perf type, PERF_TYPE_PROBE, is added to allow creating [k,u]probes
with perf_event_open(). These [k,u]probes are associated with the file
descriptor created by perf_event_open(), and are thus easy to clean up
when the file descriptor is destroyed.

Struct probe_desc and two flags, is_uprobe and is_return, are added
to describe the probe being created with perf_event_open.

Note: we use type __u64 for the probe_desc pointer instead of
__aligned_u64. The reason is to avoid changing the size of struct
perf_event_attr and breaking the new-kernel-old-utility scenario. To
avoid alignment problems with the pointer, the following patches copy
probe_desc into an __aligned_u64 before using it as a pointer.
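
For illustration, a sketch of that copy (the same pattern appears in
perf_probe_init() later in this series) looks like:

  struct probe_desc pd;
  __aligned_u64 aligned_probe_desc;

  /*
   * attr.probe_desc is a plain __u64 and may not be 8-byte aligned on
   * 32-bit architectures, so copy it into an __aligned_u64 first.
   */
  memcpy(&aligned_probe_desc, &p_event->attr.probe_desc,
         sizeof(__aligned_u64));

  if (copy_from_user(&pd, u64_to_user_ptr(aligned_probe_desc),
                     sizeof(struct probe_desc)))
          return -EFAULT;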

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Yonghong Song <yhs@fb.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 include/uapi/linux/perf_event.h | 35 +++++++++++++++++++++++++++++++++--
 1 file changed, 33 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 362493a..cc42d59 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -33,6 +33,7 @@ enum perf_type_id {
 	PERF_TYPE_HW_CACHE			= 3,
 	PERF_TYPE_RAW				= 4,
 	PERF_TYPE_BREAKPOINT			= 5,
+	PERF_TYPE_PROBE				= 6,
 
 	PERF_TYPE_MAX,				/* non-ABI */
 };
@@ -299,6 +300,29 @@ enum perf_event_read_format {
 #define PERF_ATTR_SIZE_VER4	104	/* add: sample_regs_intr */
 #define PERF_ATTR_SIZE_VER5	112	/* add: aux_watermark */
 
+#define MAX_PROBE_FUNC_NAME_LEN	64
+/*
+ * Describe a kprobe or uprobe for PERF_TYPE_PROBE.
+ * perf_event_attr.probe_desc will point to this structure. is_uprobe
+ * and is_return are used to differentiate different types of probe
+ * (k/u, probe/retprobe).
+ *
+ * The two unions should be used as follows:
+ * For uprobe: use path and offset;
+ * For kprobe: if func is empty, use addr
+ *             if func is not empty, use func and offset
+ */
+struct probe_desc {
+	union {
+		__aligned_u64	func;
+		__aligned_u64	path;
+	};
+	union {
+		__aligned_u64	addr;
+		__u64		offset;
+	};
+};
+
 /*
  * Hardware event_id to monitor via a performance monitoring event:
  *
@@ -320,7 +344,10 @@ struct perf_event_attr {
 	/*
 	 * Type specific configuration information.
 	 */
-	__u64			config;
+	union {
+		__u64		config;
+		__u64		probe_desc; /* ptr to struct probe_desc */
+	};
 
 	union {
 		__u64		sample_period;
@@ -370,7 +397,11 @@ struct perf_event_attr {
 				context_switch :  1, /* context switch data */
 				write_backward :  1, /* Write ring buffer from end to beginning */
 				namespaces     :  1, /* include namespaces data */
-				__reserved_1   : 35;
+
+				/* For PERF_TYPE_PROBE */
+				is_uprobe      :  1, /* 0: kprobe, 1: uprobe */
+				is_return      :  1, /* 0: probe, 1: retprobe */
+				__reserved_1   : 33;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
-- 
2.9.5

* [RFC] perf_event_open.2: add new type PERF_TYPE_PROBE
  2017-11-08 18:17 [RFC 0/6] enable creating [k,u]probe with perf_event_open Song Liu
  2017-11-08 18:17 ` [RFC] bcc: Try use new API to create " Song Liu
  2017-11-08 18:17 ` [RFC 1/6] perf: Add new type PERF_TYPE_PROBE Song Liu
@ 2017-11-08 18:17 ` Song Liu
  2017-11-08 18:17 ` [RFC 2/6] perf: copy new perf_event.h to tools/include/uapi Song Liu
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

A new type, PERF_TYPE_PROBE, is being added to perf_event_attr. This
patch adds information about this type.

Note: the following two flags are also added to the man page. They come
from perf_event.h in the latest kernel repo. However, they are not
related to PERF_TYPE_PROBE, so their usage is not documented in this
patch:

          write_backward :  1
          namespaces     :  1

Signed-off-by: Song Liu <songliubraving@fb.com>
---
 man2/perf_event_open.2 | 82 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 80 insertions(+), 2 deletions(-)

diff --git a/man2/perf_event_open.2 b/man2/perf_event_open.2
index 03dc748..a37b78b 100644
--- a/man2/perf_event_open.2
+++ b/man2/perf_event_open.2
@@ -205,7 +205,12 @@ for the event being created.
 struct perf_event_attr {
     __u32 type;                 /* Type of event */
     __u32 size;                 /* Size of attribute structure */
-    __u64 config;               /* Type-specific configuration */
+
+    /* Type-specific configuration */
+    union {
+        __u64 config;
+        __u64 probe_desc; /* ptr to struct probe_desc */
+    };
 
     union {
         __u64 sample_period;    /* Period of sampling */
@@ -244,8 +249,13 @@ struct perf_event_attr {
                                    due to exec */
           use_clockid    :  1,  /* use clockid for time fields */
           context_switch :  1,  /* context switch data */
+          write_backward :  1,  /* Write ring buffer from end to beginning */
+          namespaces     :  1,  /* include namespaces data */
 
-          __reserved_1   : 37;
+          /* For PERF_TYPE_PROBE */
+          is_uprobe      :  1,  /* 0 for kprobe, 1 for uprobe */
+          is_return      :  1,  /* 0 for [k,u]probe, 1 for [k,u]retprobe */
+          __reserved_1   : 33;
 
     union {
         __u32 wakeup_events;    /* wakeup every n events */
@@ -336,6 +346,13 @@ field.
 For instance,
 .I /sys/bus/event_source/devices/cpu/type
 contains the value for the core CPU PMU, which is usually 4.
+.TP
+.BR PERF_TYPE_PROBE " (since Linux 4.TBD)"
+This indicates a kprobe or uprobe should be created and
+attached to the file descriptor.
+See fields
+.IR probe_desc ", " is_uprobe ", and " is_return
+for more details.
 .RE
 .TP
 .I "size"
@@ -627,6 +644,67 @@ then leave
 .I config
 set to zero.
 Its parameters are set in other places.
+.PP
+If
+.I type
+is
+.BR PERF_TYPE_PROBE ,
+.I probe_desc
+is used instead of
+.IR config .
+.RE
+.TP
+.I probe_desc
+The
+.I probe_desc
+field is used with
+.I type
+of
+.BR PERF_TYPE_PROBE ,
+to save a pointer to struct probe_desc:
+.PP
+.in +8n
+.EX
+struct probe_desc {
+    union {
+        __aligned_u64 func;
+        __aligned_u64 path;
+    };
+    union {
+        __aligned_u64 addr;
+        __u64         offset;
+    };
+};
+.EE
+Different fields of struct probe_desc are used to describe kprobes
+and uprobes. For kprobes: use
+.I func
+and
+.IR offset ,
+or use
+.I addr
+and leave
+.I func
+as NULL. For uprobes: use
+.I path
+and
+.IR offset .
+.RE
+.TP
+.IR is_uprobe ", " is_return
+These two bits are used with
+.I type
+of
+.BR PERF_TYPE_PROBE ,
+to specify type of the probe:
+.PP
+.in +8n
+.EX
+is_uprobe == 0, is_return == 0: kprobe
+is_uprobe == 0, is_return == 1: kretprobe
+is_uprobe == 1, is_return == 0: uprobe
+is_uprobe == 1, is_return == 1: uretprobe
+.EE
 .RE
 .TP
 .IR sample_period ", " sample_freq
-- 
2.9.5

* [RFC 2/6] perf: copy new perf_event.h to tools/include/uapi
  2017-11-08 18:17 [RFC 0/6] enable creating [k,u]probe with perf_event_open Song Liu
                   ` (2 preceding siblings ...)
  2017-11-08 18:17 ` [RFC] perf_event_open.2: add " Song Liu
@ 2017-11-08 18:17 ` Song Liu
  2017-11-08 18:17 ` [RFC 3/6] perf: implement kprobe support to PERF_TYPE_PROBE Song Liu
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

perf_event.h was updated in the previous patch; this patch applies the
same changes to the tools/ version. This part is kept in a separate
patch in case the two files are backported separately.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Yonghong Song <yhs@fb.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/include/uapi/linux/perf_event.h | 35 +++++++++++++++++++++++++++++++++--
 1 file changed, 33 insertions(+), 2 deletions(-)

diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
index 140ae63..4941241 100644
--- a/tools/include/uapi/linux/perf_event.h
+++ b/tools/include/uapi/linux/perf_event.h
@@ -32,6 +32,7 @@ enum perf_type_id {
 	PERF_TYPE_HW_CACHE			= 3,
 	PERF_TYPE_RAW				= 4,
 	PERF_TYPE_BREAKPOINT			= 5,
+	PERF_TYPE_PROBE				= 6,
 
 	PERF_TYPE_MAX,				/* non-ABI */
 };
@@ -298,6 +299,29 @@ enum perf_event_read_format {
 #define PERF_ATTR_SIZE_VER4	104	/* add: sample_regs_intr */
 #define PERF_ATTR_SIZE_VER5	112	/* add: aux_watermark */
 
+#define MAX_PROBE_FUNC_NAME_LEN	64
+/*
+ * Describe a kprobe or uprobe for PERF_TYPE_PROBE.
+ * perf_event_attr.probe_desc will point to this structure. is_uprobe
+ * and is_return are used to differentiate different types of probe
+ * (k/u, probe/retprobe).
+ *
+ * The two unions should be used as follows:
+ * For uprobe: use path and offset;
+ * For kprobe: if func is empty, use addr
+ *             if func is not empty, use func and offset
+ */
+struct probe_desc {
+	union {
+		__aligned_u64	func;
+		__aligned_u64	path;
+	};
+	union {
+		__aligned_u64	addr;
+		__u64		offset;
+	};
+};
+
 /*
  * Hardware event_id to monitor via a performance monitoring event:
  *
@@ -319,7 +343,10 @@ struct perf_event_attr {
 	/*
 	 * Type specific configuration information.
 	 */
-	__u64			config;
+	union {
+		__u64		config;
+		__u64		probe_desc; /* ptr to struct probe_desc */
+	};
 
 	union {
 		__u64		sample_period;
@@ -369,7 +396,11 @@ struct perf_event_attr {
 				context_switch :  1, /* context switch data */
 				write_backward :  1, /* Write ring buffer from end to beginning */
 				namespaces     :  1, /* include namespaces data */
-				__reserved_1   : 35;
+
+				/* For PERF_TYPE_PROBE */
+				is_uprobe      :  1, /* 0: kprobe, 1: uprobe */
+				is_return      :  1, /* 0: probe, 1: retprobe */
+				__reserved_1   : 33;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
-- 
2.9.5

* [RFC 3/6] perf: implement kprobe support to PERF_TYPE_PROBE
  2017-11-08 18:17 [RFC 0/6] enable creating [k,u]probe with perf_event_open Song Liu
                   ` (3 preceding siblings ...)
  2017-11-08 18:17 ` [RFC 2/6] perf: copy new perf_event.h to tools/include/uapi Song Liu
@ 2017-11-08 18:17 ` Song Liu
  2017-11-08 18:17 ` [RFC 4/6] perf: implement uprobe " Song Liu
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

A new pmu, perf_probe, is created for PERF_TYPE_PROBE. Based on the
input from perf_event_open(), perf_probe creates a kprobe (or
kretprobe) for the perf_event. This kprobe is private to the
perf_event, so it is not added to any global list and is not
visible in tracefs.

Two functions, create_local_trace_kprobe() and
destroy_local_trace_kprobe(), are added to create and destroy these
local trace_kprobes.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Yonghong Song <yhs@fb.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
---
 include/linux/trace_events.h    |  2 +
 kernel/events/core.c            | 39 +++++++++++++++++-
 kernel/trace/trace_event_perf.c | 62 ++++++++++++++++++++++++++++
 kernel/trace/trace_kprobe.c     | 91 +++++++++++++++++++++++++++++++++++++----
 kernel/trace/trace_probe.h      |  7 ++++
 5 files changed, 191 insertions(+), 10 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 84014ec..96ce715 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -528,6 +528,8 @@ extern int  perf_trace_init(struct perf_event *event);
 extern void perf_trace_destroy(struct perf_event *event);
 extern int  perf_trace_add(struct perf_event *event, int flags);
 extern void perf_trace_del(struct perf_event *event, int flags);
+extern int  perf_probe_init(struct perf_event *event);
+extern void perf_probe_destroy(struct perf_event *event);
 extern int  ftrace_profile_set_filter(struct perf_event *event, int event_id,
 				     char *filter_str);
 extern void ftrace_profile_free_filter(struct perf_event *event);
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 9660ee6..1f1f8dd 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8051,6 +8051,28 @@ static int perf_tp_event_init(struct perf_event *event)
 	return 0;
 }
 
+static int perf_probe_event_init(struct perf_event *event)
+{
+	int err;
+
+	if (event->attr.type != PERF_TYPE_PROBE)
+		return -ENOENT;
+
+	/*
+	 * no branch sampling for probe events
+	 */
+	if (has_branch_stack(event))
+		return -EOPNOTSUPP;
+
+	err = perf_probe_init(event);
+	if (err)
+		return err;
+
+	event->destroy = perf_probe_destroy;
+
+	return 0;
+}
+
 static struct pmu perf_tracepoint = {
 	.task_ctx_nr	= perf_sw_context,
 
@@ -8062,9 +8084,20 @@ static struct pmu perf_tracepoint = {
 	.read		= perf_swevent_read,
 };
 
+static struct pmu perf_probe = {
+	.task_ctx_nr	= perf_sw_context,
+	.event_init	= perf_probe_event_init,
+	.add		= perf_trace_add,
+	.del		= perf_trace_del,
+	.start		= perf_swevent_start,
+	.stop		= perf_swevent_stop,
+	.read		= perf_swevent_read,
+};
+
 static inline void perf_tp_register(void)
 {
 	perf_pmu_register(&perf_tracepoint, "tracepoint", PERF_TYPE_TRACEPOINT);
+	perf_pmu_register(&perf_probe, "probe", PERF_TYPE_PROBE);
 }
 
 static void perf_event_free_filter(struct perf_event *event)
@@ -8147,7 +8180,8 @@ static int perf_event_set_bpf_prog(struct perf_event *event, u32 prog_fd)
 	struct bpf_prog *prog;
 	int ret;
 
-	if (event->attr.type != PERF_TYPE_TRACEPOINT)
+	if (event->attr.type != PERF_TYPE_TRACEPOINT &&
+	    event->attr.type != PERF_TYPE_PROBE)
 		return perf_event_set_bpf_handler(event, prog_fd);
 
 	is_kprobe = event->tp_event->flags & TRACE_EVENT_FL_UKPROBE;
@@ -8186,7 +8220,8 @@ static int perf_event_set_bpf_prog(struct perf_event *event, u32 prog_fd)
 
 static void perf_event_free_bpf_prog(struct perf_event *event)
 {
-	if (event->attr.type != PERF_TYPE_TRACEPOINT) {
+	if (event->attr.type != PERF_TYPE_TRACEPOINT &&
+	    event->attr.type != PERF_TYPE_PROBE) {
 		perf_event_free_bpf_handler(event);
 		return;
 	}
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 13ba2d3..02f4c68 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -8,6 +8,7 @@
 #include <linux/module.h>
 #include <linux/kprobes.h>
 #include "trace.h"
+#include "trace_probe.h"
 
 static char __percpu *perf_trace_buf[PERF_NR_CONTEXTS];
 
@@ -229,6 +230,58 @@ int perf_trace_init(struct perf_event *p_event)
 	return ret;
 }
 
+int perf_probe_init(struct perf_event *p_event)
+{
+	struct probe_desc pd;
+	int ret;
+	struct trace_event_call *tp_event;
+	char *name = NULL;
+	__aligned_u64 aligned_probe_desc;
+
+	/*
+	 * attr.probe_desc may not be 64-bit aligned on 32-bit systems.
+	 * Make an aligned copy of it before passing it to u64_to_user_ptr().
+	 */
+	memcpy(&aligned_probe_desc, &p_event->attr.probe_desc,
+	       sizeof(__aligned_u64));
+
+	if (copy_from_user(&pd, u64_to_user_ptr(aligned_probe_desc),
+			   sizeof(struct probe_desc)))
+		return -EFAULT;
+
+	if (pd.func) {
+		name = kzalloc(MAX_PROBE_FUNC_NAME_LEN, GFP_KERNEL);
+		if (!name)
+			return -ENOMEM;
+		ret = strncpy_from_user(name, u64_to_user_ptr(pd.func),
+					MAX_PROBE_FUNC_NAME_LEN);
+		if (ret < 0)
+			goto out;
+
+		if (name[0] == '\0') {
+			kfree(name);
+			name = NULL;
+		}
+	}
+
+	if (!p_event->attr.is_uprobe) {
+		tp_event = create_local_trace_kprobe(
+			name, (void *)pd.addr, pd.offset,
+			p_event->attr.is_return);
+		if (IS_ERR(tp_event)) {
+			ret = PTR_ERR(tp_event);
+			goto out;
+		}
+		ret = perf_trace_event_init(tp_event, p_event);
+		if (ret)
+			destroy_local_trace_kprobe(tp_event);
+	} else
+		ret = -EOPNOTSUPP;
+out:
+	kfree(name);
+	return ret;
+}
+
 void perf_trace_destroy(struct perf_event *p_event)
 {
 	mutex_lock(&event_mutex);
@@ -237,6 +290,15 @@ void perf_trace_destroy(struct perf_event *p_event)
 	mutex_unlock(&event_mutex);
 }
 
+void perf_probe_destroy(struct perf_event *p_event)
+{
+	perf_trace_event_close(p_event);
+	perf_trace_event_unreg(p_event);
+
+	if (!p_event->attr.is_uprobe)
+		destroy_local_trace_kprobe(p_event->tp_event);
+}
+
 int perf_trace_add(struct perf_event *p_event, int flags)
 {
 	struct trace_event_call *tp_event = p_event->tp_event;
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index abf92e4..121a067 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -438,6 +438,14 @@ disable_trace_kprobe(struct trace_kprobe *tk, struct trace_event_file *file)
 			disable_kprobe(&tk->rp.kp);
 		wait = 1;
 	}
+
+	/*
+	 * If tk is not added to any list, it must be a local trace_kprobe
+	 * created with perf_event_open(). We don't need to wait for these
+	 * trace_kprobes.
+	 */
+	if (list_empty(&tk->list))
+		wait = 0;
  out:
 	if (wait) {
 		/*
@@ -1313,12 +1321,9 @@ static struct trace_event_functions kprobe_funcs = {
 	.trace		= print_kprobe_event
 };
 
-static int register_kprobe_event(struct trace_kprobe *tk)
+static inline void init_trace_event_call(struct trace_kprobe *tk,
+					 struct trace_event_call *call)
 {
-	struct trace_event_call *call = &tk->tp.call;
-	int ret;
-
-	/* Initialize trace_event_call */
 	INIT_LIST_HEAD(&call->class->fields);
 	if (trace_kprobe_is_return(tk)) {
 		call->event.funcs = &kretprobe_funcs;
@@ -1327,6 +1332,19 @@ static int register_kprobe_event(struct trace_kprobe *tk)
 		call->event.funcs = &kprobe_funcs;
 		call->class->define_fields = kprobe_event_define_fields;
 	}
+
+	call->flags = TRACE_EVENT_FL_KPROBE;
+	call->class->reg = kprobe_register;
+	call->data = tk;
+}
+
+static int register_kprobe_event(struct trace_kprobe *tk)
+{
+	struct trace_event_call *call = &tk->tp.call;
+	int ret = 0;
+
+	init_trace_event_call(tk, call);
+
 	if (set_print_fmt(&tk->tp, trace_kprobe_is_return(tk)) < 0)
 		return -ENOMEM;
 	ret = register_trace_event(&call->event);
@@ -1334,9 +1352,6 @@ static int register_kprobe_event(struct trace_kprobe *tk)
 		kfree(call->print_fmt);
 		return -ENODEV;
 	}
-	call->flags = TRACE_EVENT_FL_KPROBE;
-	call->class->reg = kprobe_register;
-	call->data = tk;
 	ret = trace_add_event_call(call);
 	if (ret) {
 		pr_info("Failed to register kprobe event: %s\n",
@@ -1358,6 +1373,66 @@ static int unregister_kprobe_event(struct trace_kprobe *tk)
 	return ret;
 }
 
+#ifdef CONFIG_PERF_EVENTS
+/* create a trace_kprobe, but don't add it to global lists */
+struct trace_event_call *
+create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
+			  bool is_return)
+{
+	struct trace_kprobe *tk;
+	int ret;
+	char *event;
+
+	/*
+	 * local trace_kprobes are not added to probe_list, so they are never
+	 * searched in find_trace_kprobe(). Therefore, there is no concern
+	 * about duplicated names here.
+	 */
+	event = func ? func : "DUMMY_EVENT";
+
+	tk = alloc_trace_kprobe(KPROBE_EVENT_SYSTEM, event, (void *)addr, func,
+				offs, 0 /* maxactive */, 0 /* nargs */,
+				is_return);
+
+	if (IS_ERR(tk)) {
+		pr_info("Failed to allocate trace_probe.(%d)\n",
+			(int)PTR_ERR(tk));
+		return ERR_CAST(tk);
+	}
+
+	init_trace_event_call(tk, &tk->tp.call);
+
+	if (set_print_fmt(&tk->tp, trace_kprobe_is_return(tk)) < 0) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	ret = __register_trace_kprobe(tk);
+	if (ret < 0)
+		goto error;
+
+	return &tk->tp.call;
+error:
+	free_trace_kprobe(tk);
+	return ERR_PTR(ret);
+}
+
+void destroy_local_trace_kprobe(struct trace_event_call *event_call)
+{
+	struct trace_kprobe *tk;
+
+	tk = container_of(event_call, struct trace_kprobe, tp.call);
+
+	if (trace_probe_is_enabled(&tk->tp)) {
+		WARN_ON(1);
+		return;
+	}
+
+	__unregister_trace_kprobe(tk);
+	free_trace_kprobe(tk);
+}
+#endif /* CONFIG_PERF_EVENTS */
+
 /* Make a tracefs interface for controlling probe points */
 static __init int init_kprobe_trace(void)
 {
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 903273c..910ae1b 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -411,3 +411,10 @@ store_trace_args(int ent_size, struct trace_probe *tp, struct pt_regs *regs,
 }
 
 extern int set_print_fmt(struct trace_probe *tp, bool is_return);
+
+#ifdef CONFIG_PERF_EVENTS
+extern struct trace_event_call *
+create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
+			  bool is_return);
+extern void destroy_local_trace_kprobe(struct trace_event_call *event_call);
+#endif
-- 
2.9.5

* [RFC 4/6] perf: implement uprobe support to PERF_TYPE_PROBE
  2017-11-08 18:17 [RFC 0/6] enable creating [k,u]probe with perf_event_open Song Liu
                   ` (4 preceding siblings ...)
  2017-11-08 18:17 ` [RFC 3/6] perf: implement kprobe support to PERF_TYPE_PROBE Song Liu
@ 2017-11-08 18:17 ` Song Liu
  2017-11-08 18:17 ` [RFC 5/6] bpf: add option for bpf_load.c to use PERF_TYPE_PROBE Song Liu
  2017-11-08 18:17 ` [RFC 6/6] bpf: add new test test_many_kprobe Song Liu
  7 siblings, 0 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

This patch adds uprobe support to perf_probe, following the same
pattern as the previous patch (for kprobes).

Two functions, create_local_trace_uprobe() and
destroy_local_trace_uprobe(), are created so a uprobe can be created
and attached to the file descriptor created by perf_event_open().
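
For context, the user-space side fills probe_desc for a uprobe as in the
sketch below (it mirrors what the bcc patch in this series does; the
helper name is only for illustration, and it uses the same headers as the
kprobe sketch in the cover letter):

  static int open_uprobe_fd(const char *binary_path, unsigned long offset,
                            int pid)
  {
          struct probe_desc pd = {};
          struct perf_event_attr attr = {};

          pd.path = (__u64)(unsigned long)binary_path;
          pd.offset = offset;        /* offset of the probed instruction */

          attr.type = PERF_TYPE_PROBE;
          attr.probe_desc = (__u64)(unsigned long)&pd;
          attr.is_uprobe = 1;        /* uprobe, not kprobe */
          attr.is_return = 0;        /* set to 1 for uretprobe */
          attr.sample_period = 1;

          return syscall(__NR_perf_event_open, &attr, pid, 0 /* cpu */,
                         -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
  }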

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Yonghong Song <yhs@fb.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
---
 kernel/trace/trace_event_perf.c | 33 ++++++++++++++-
 kernel/trace/trace_probe.h      |  4 ++
 kernel/trace/trace_uprobe.c     | 90 ++++++++++++++++++++++++++++++++++++-----
 3 files changed, 115 insertions(+), 12 deletions(-)

diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 02f4c68..e1216b2 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -275,8 +275,26 @@ int perf_probe_init(struct perf_event *p_event)
 		ret = perf_trace_event_init(tp_event, p_event);
 		if (ret)
 			destroy_local_trace_kprobe(tp_event);
-	} else
-		ret = -EOPNOTSUPP;
+	} else {
+		if (!name)
+			return -EINVAL;
+		tp_event = create_local_trace_uprobe(
+			name, pd.offset, p_event->attr.is_return);
+		if (IS_ERR(tp_event)) {
+			ret = PTR_ERR(tp_event);
+			goto out;
+		}
+		/*
+		 * A local trace_uprobe needs to hold event_mutex to call
+		 * uprobe_buffer_enable() and uprobe_buffer_disable().
+		 * event_mutex is not required for local trace_kprobes.
+		 */
+		mutex_lock(&event_mutex);
+		ret = perf_trace_event_init(tp_event, p_event);
+		if (ret)
+			destroy_local_trace_uprobe(tp_event);
+		mutex_unlock(&event_mutex);
+	}
 out:
 	kfree(name);
 	return ret;
@@ -292,11 +310,22 @@ void perf_trace_destroy(struct perf_event *p_event)
 
 void perf_probe_destroy(struct perf_event *p_event)
 {
+	/*
+	 * A local trace_uprobe needs to hold event_mutex to call
+	 * uprobe_buffer_enable() and uprobe_buffer_disable().
+	 * event_mutex is not required for local trace_kprobes.
+	 */
+	if (p_event->attr.is_uprobe)
+		mutex_lock(&event_mutex);
 	perf_trace_event_close(p_event);
 	perf_trace_event_unreg(p_event);
+	if (p_event->attr.is_uprobe)
+		mutex_unlock(&event_mutex);
 
 	if (!p_event->attr.is_uprobe)
 		destroy_local_trace_kprobe(p_event->tp_event);
+	else
+		destroy_local_trace_uprobe(p_event->tp_event);
 }
 
 int perf_trace_add(struct perf_event *p_event, int flags)
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 910ae1b..86b5925 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -417,4 +417,8 @@ extern struct trace_event_call *
 create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
 			  bool is_return);
 extern void destroy_local_trace_kprobe(struct trace_event_call *event_call);
+
+extern struct trace_event_call *
+create_local_trace_uprobe(char *name, unsigned long offs, bool is_return);
+extern void destroy_local_trace_uprobe(struct trace_event_call *event_call);
 #endif
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 153c0e4..1aa82be 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -31,8 +31,8 @@
 #define UPROBE_EVENT_SYSTEM	"uprobes"
 
 struct uprobe_trace_entry_head {
-	struct trace_entry	ent;
-	unsigned long		vaddr[];
+	struct trace_entry      ent;
+	unsigned long           vaddr[];
 };
 
 #define SIZEOF_TRACE_ENTRY(is_return)			\
@@ -1292,16 +1292,25 @@ static struct trace_event_functions uprobe_funcs = {
 	.trace		= print_uprobe_event
 };
 
-static int register_uprobe_event(struct trace_uprobe *tu)
+static inline void init_trace_event_call(struct trace_uprobe *tu,
+					 struct trace_event_call *call)
 {
-	struct trace_event_call *call = &tu->tp.call;
-	int ret;
-
-	/* Initialize trace_event_call */
 	INIT_LIST_HEAD(&call->class->fields);
 	call->event.funcs = &uprobe_funcs;
 	call->class->define_fields = uprobe_event_define_fields;
 
+	call->flags = TRACE_EVENT_FL_UPROBE;
+	call->class->reg = trace_uprobe_register;
+	call->data = tu;
+}
+
+static int register_uprobe_event(struct trace_uprobe *tu)
+{
+	struct trace_event_call *call = &tu->tp.call;
+	int ret = 0;
+
+	init_trace_event_call(tu, call);
+
 	if (set_print_fmt(&tu->tp, is_ret_probe(tu)) < 0)
 		return -ENOMEM;
 
@@ -1311,9 +1320,6 @@ static int register_uprobe_event(struct trace_uprobe *tu)
 		return -ENODEV;
 	}
 
-	call->flags = TRACE_EVENT_FL_UPROBE;
-	call->class->reg = trace_uprobe_register;
-	call->data = tu;
 	ret = trace_add_event_call(call);
 
 	if (ret) {
@@ -1339,6 +1345,70 @@ static int unregister_uprobe_event(struct trace_uprobe *tu)
 	return 0;
 }
 
+#ifdef CONFIG_PERF_EVENTS
+struct trace_event_call *
+create_local_trace_uprobe(char *name, unsigned long offs, bool is_return)
+{
+	struct trace_uprobe *tu;
+	struct inode *inode;
+	struct path path;
+	int ret;
+
+	ret = kern_path(name, LOOKUP_FOLLOW, &path);
+	if (ret)
+		return ERR_PTR(ret);
+
+	inode = igrab(d_inode(path.dentry));
+	path_put(&path);
+
+	if (!inode || !S_ISREG(inode->i_mode)) {
+		iput(inode);
+		return ERR_PTR(-EINVAL);
+	}
+
+	/*
+	 * Like local trace_kprobes, local trace_uprobes are not added to any
+	 * global list, so there is no concern about the duplicated name
+	 * "DUMMY_EVENT" here.
+	 */
+	tu = alloc_trace_uprobe(UPROBE_EVENT_SYSTEM, "DUMMY_EVENT", 0,
+				is_return);
+
+	if (IS_ERR(tu)) {
+		pr_info("Failed to allocate trace_uprobe.(%d)\n",
+			(int)PTR_ERR(tu));
+		return ERR_CAST(tu);
+	}
+
+	tu->offset = offs;
+	tu->inode = inode;
+	tu->filename = kstrdup(name, GFP_KERNEL);
+	init_trace_event_call(tu, &tu->tp.call);
+
+	if (set_print_fmt(&tu->tp, is_ret_probe(tu)) < 0) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	return &tu->tp.call;
+error:
+	free_trace_uprobe(tu);
+	return ERR_PTR(ret);
+}
+
+void destroy_local_trace_uprobe(struct trace_event_call *event_call)
+{
+	struct trace_uprobe *tu;
+
+	tu = container_of(event_call, struct trace_uprobe, tp.call);
+
+	kfree(tu->tp.call.print_fmt);
+	tu->tp.call.print_fmt = NULL;
+
+	free_trace_uprobe(tu);
+}
+#endif /* CONFIG_PERF_EVENTS */
+
 /* Make a trace interface for controling probe points */
 static __init int init_uprobe_trace(void)
 {
-- 
2.9.5

* [RFC 5/6] bpf: add option for bpf_load.c to use PERF_TYPE_PROBE
  2017-11-08 18:17 [RFC 0/6] enable creating [k,u]probe with perf_event_open Song Liu
                   ` (5 preceding siblings ...)
  2017-11-08 18:17 ` [RFC 4/6] perf: implement uprobe " Song Liu
@ 2017-11-08 18:17 ` Song Liu
  2017-11-08 18:17 ` [RFC 6/6] bpf: add new test test_many_kprobe Song Liu
  7 siblings, 0 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

Function load_and_attach() is updated so that it can create kprobes
with either the old text-based API or the new PERF_TYPE_PROBE API.

A global flag use_perf_type_probe is added to select between the
two APIs.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
---
 samples/bpf/bpf_load.c | 56 ++++++++++++++++++++++++++++++++------------------
 samples/bpf/bpf_load.h |  8 ++++++++
 2 files changed, 44 insertions(+), 20 deletions(-)

diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index 522ca92..a47cb1c 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -8,7 +8,6 @@
 #include <errno.h>
 #include <unistd.h>
 #include <string.h>
-#include <stdbool.h>
 #include <stdlib.h>
 #include <linux/bpf.h>
 #include <linux/filter.h>
@@ -42,6 +41,7 @@ int prog_array_fd = -1;
 
 struct bpf_map_data map_data[MAX_MAPS];
 int map_data_count = 0;
+bool use_perf_type_probe = true;
 
 static int populate_prog_array(const char *event, int prog_fd)
 {
@@ -70,8 +70,9 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 	size_t insns_cnt = size / sizeof(struct bpf_insn);
 	enum bpf_prog_type prog_type;
 	char buf[256];
-	int fd, efd, err, id;
+	int fd, efd, err, id = -1;
 	struct perf_event_attr attr = {};
+	struct probe_desc pd;
 
 	attr.type = PERF_TYPE_TRACEPOINT;
 	attr.sample_type = PERF_SAMPLE_RAW;
@@ -128,7 +129,7 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 		return populate_prog_array(event, fd);
 	}
 
-	if (is_kprobe || is_kretprobe) {
+	if (!use_perf_type_probe && (is_kprobe || is_kretprobe)) {
 		if (is_kprobe)
 			event += 7;
 		else
@@ -169,27 +170,42 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 		strcat(buf, "/id");
 	}
 
-	efd = open(buf, O_RDONLY, 0);
-	if (efd < 0) {
-		printf("failed to open event %s\n", event);
-		return -1;
-	}
-
-	err = read(efd, buf, sizeof(buf));
-	if (err < 0 || err >= sizeof(buf)) {
-		printf("read from '%s' failed '%s'\n", event, strerror(errno));
-		return -1;
+	if (use_perf_type_probe && (is_kprobe || is_kretprobe)) {
+		attr.type = PERF_TYPE_PROBE;
+		pd.func = ptr_to_u64(event + strlen(is_kprobe ? "kprobe/"
+						    : "kretprobe/"));
+		pd.offset = 0;
+		attr.is_return  = !!is_kretprobe;
+		attr.probe_desc = ptr_to_u64(&pd);
+	} else {
+		efd = open(buf, O_RDONLY, 0);
+		if (efd < 0) {
+			printf("failed to open event %s\n", event);
+			return -1;
+		}
+		err = read(efd, buf, sizeof(buf));
+		if (err < 0 || err >= sizeof(buf)) {
+			printf("read from '%s' failed '%s'\n", event,
+			       strerror(errno));
+			return -1;
+		}
+		close(efd);
+		buf[err] = 0;
+		id = atoi(buf);
+		attr.config = id;
 	}
 
-	close(efd);
-
-	buf[err] = 0;
-	id = atoi(buf);
-	attr.config = id;
-
 	efd = sys_perf_event_open(&attr, -1/*pid*/, 0/*cpu*/, -1/*group_fd*/, 0);
 	if (efd < 0) {
-		printf("event %d fd %d err %s\n", id, efd, strerror(errno));
+		if (use_perf_type_probe && (is_kprobe || is_kretprobe))
+			printf("k%sprobe %s fd %d err %s\n",
+			       is_kprobe ? "" : "ret",
+			       event + strlen(is_kprobe ? "kprobe/"
+					      : "kretprobe/"),
+			       efd, strerror(errno));
+		else
+			printf("event %d fd %d err %s\n", id, efd,
+			       strerror(errno));
 		return -1;
 	}
 	event_fd[prog_cnt - 1] = efd;
diff --git a/samples/bpf/bpf_load.h b/samples/bpf/bpf_load.h
index 7d57a42..e7a8a21 100644
--- a/samples/bpf/bpf_load.h
+++ b/samples/bpf/bpf_load.h
@@ -2,6 +2,7 @@
 #ifndef __BPF_LOAD_H
 #define __BPF_LOAD_H
 
+#include <stdbool.h>
 #include "libbpf.h"
 
 #define MAX_MAPS 32
@@ -38,6 +39,8 @@ extern int map_fd[MAX_MAPS];
 extern struct bpf_map_data map_data[MAX_MAPS];
 extern int map_data_count;
 
+extern bool use_perf_type_probe;
+
 /* parses elf file compiled by llvm .c->.o
  * . parses 'maps' section and creates maps via BPF syscall
  * . parses 'license' section and passes it to syscall
@@ -59,6 +62,11 @@ struct ksym {
 	char *name;
 };
 
+static inline __u64 ptr_to_u64(const void *ptr)
+{
+	return (__u64) (unsigned long) ptr;
+}
+
 int load_kallsyms(void);
 struct ksym *ksym_search(long key);
 int set_link_xdp_fd(int ifindex, int fd, __u32 flags);
-- 
2.9.5

* [RFC 6/6] bpf: add new test test_many_kprobe
  2017-11-08 18:17 [RFC 0/6] enable creating [k,u]probe with perf_event_open Song Liu
                   ` (6 preceding siblings ...)
  2017-11-08 18:17 ` [RFC 5/6] bpf: add option for bpf_load.c to use PERF_TYPE_PROBE Song Liu
@ 2017-11-08 18:17 ` Song Liu
  7 siblings, 0 replies; 9+ messages in thread
From: Song Liu @ 2017-11-08 18:17 UTC (permalink / raw)
  To: peterz, rostedt, mingo, davem, netdev, ast, daniel; +Cc: kernel-team, Song Liu

The test compares the old text-based kprobe API with PERF_TYPE_PROBE.

Here is a sample output of this test:

Creating 1000 kprobes with text-based API takes 6.979683 seconds
Cleaning 1000 kprobes with text-based API takes 84.897687 seconds
Creating 1000 kprobes with PERF_TYPE_PROBE (function name) takes 5.077558 seconds
Cleaning 1000 kprobes with PERF_TYPE_PROBE (function name) takes 81.241354 seconds
Creating 1000 kprobes with PERF_TYPE_PROBE (function addr) takes 5.218255 seconds
Cleaning 1000 kprobes with PERF_TYPE_PROBE (function addr) takes 80.010731 seconds

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
---
 samples/bpf/Makefile                |   3 +
 samples/bpf/bpf_load.c              |   5 +-
 samples/bpf/bpf_load.h              |   4 +
 samples/bpf/test_many_kprobe_user.c | 184 ++++++++++++++++++++++++++++++++++++
 4 files changed, 193 insertions(+), 3 deletions(-)
 create mode 100644 samples/bpf/test_many_kprobe_user.c

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 3b4945c..1b729d6 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -44,6 +44,7 @@ hostprogs-y += xdp_redirect_map
 hostprogs-y += xdp_redirect_cpu
 hostprogs-y += xdp_monitor
 hostprogs-y += syscall_tp
+hostprogs-y += test_many_kprobe
 
 # Libbpf dependencies
 LIBBPF := ../../tools/lib/bpf/bpf.o
@@ -92,6 +93,7 @@ xdp_redirect_map-objs := bpf_load.o $(LIBBPF) xdp_redirect_map_user.o
 xdp_redirect_cpu-objs := bpf_load.o $(LIBBPF) xdp_redirect_cpu_user.o
 xdp_monitor-objs := bpf_load.o $(LIBBPF) xdp_monitor_user.o
 syscall_tp-objs := bpf_load.o $(LIBBPF) syscall_tp_user.o
+test_many_kprobe-objs := bpf_load.o $(LIBBPF) test_many_kprobe_user.o
 
 # Tell kbuild to always build the programs
 always := $(hostprogs-y)
@@ -182,6 +184,7 @@ HOSTLOADLIBES_xdp_redirect_map += -lelf
 HOSTLOADLIBES_xdp_redirect_cpu += -lelf
 HOSTLOADLIBES_xdp_monitor += -lelf
 HOSTLOADLIBES_syscall_tp += -lelf
+HOSTLOADLIBES_test_many_kprobe += -lelf
 
 # Allows pointing LLC/CLANG to a LLVM backend with bpf support, redefine on cmdline:
 #  make samples/bpf/ LLC=~/git/llvm/build/bin/llc CLANG=~/git/llvm/build/bin/clang
diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index a47cb1c..590e6f0 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -639,9 +639,8 @@ void read_trace_pipe(void)
 	}
 }
 
-#define MAX_SYMS 300000
-static struct ksym syms[MAX_SYMS];
-static int sym_cnt;
+struct ksym syms[MAX_SYMS];
+int sym_cnt;
 
 static int ksym_cmp(const void *p1, const void *p2)
 {
diff --git a/samples/bpf/bpf_load.h b/samples/bpf/bpf_load.h
index e7a8a21..16bc263 100644
--- a/samples/bpf/bpf_load.h
+++ b/samples/bpf/bpf_load.h
@@ -67,6 +67,10 @@ static inline __u64 ptr_to_u64(const void *ptr)
 	return (__u64) (unsigned long) ptr;
 }
 
+#define MAX_SYMS 300000
+extern struct ksym syms[MAX_SYMS];
+extern int sym_cnt;
+
 int load_kallsyms(void);
 struct ksym *ksym_search(long key);
 int set_link_xdp_fd(int ifindex, int fd, __u32 flags);
diff --git a/samples/bpf/test_many_kprobe_user.c b/samples/bpf/test_many_kprobe_user.c
new file mode 100644
index 0000000..70b680e
--- /dev/null
+++ b/samples/bpf/test_many_kprobe_user.c
@@ -0,0 +1,184 @@
+/* Copyright (c) 2017 Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <string.h>
+#include <libelf.h>
+#include <gelf.h>
+#include <linux/version.h>
+#include <errno.h>
+#include <stdbool.h>
+#include <time.h>
+#include "libbpf.h"
+#include "bpf_load.h"
+#include "perf-sys.h"
+
+#define MAX_KPROBES 1000
+
+#define DEBUGFS "/sys/kernel/debug/tracing/"
+
+int kprobes[MAX_KPROBES] = {0};
+int kprobe_count;
+int perf_event_fds[MAX_KPROBES];
+const char license[] = "GPL";
+
+static __u64 time_get_ns(void)
+{
+	struct timespec ts;
+
+	clock_gettime(CLOCK_MONOTONIC, &ts);
+	return ts.tv_sec * 1000000000ull + ts.tv_nsec;
+}
+
+static int kprobe_api(char *func, void *addr, bool use_new_api)
+{
+	int efd;
+	struct perf_event_attr attr = {};
+	struct probe_desc pd;
+	char buf[256];
+	int err, id;
+
+	attr.sample_type = PERF_SAMPLE_RAW;
+	attr.sample_period = 1;
+	attr.wakeup_events = 1;
+
+	if (use_new_api) {
+		attr.type = PERF_TYPE_PROBE;
+		if (func) {
+			pd.func = ptr_to_u64(func);
+			pd.offset = 0;
+		} else {
+			pd.func = 0;
+			pd.offset = ptr_to_u64(addr);
+		}
+
+		attr.probe_desc = ptr_to_u64(&pd);
+	} else {
+		attr.type = PERF_TYPE_TRACEPOINT;
+		snprintf(buf, sizeof(buf),
+			 "echo 'p:%s %s' >> /sys/kernel/debug/tracing/kprobe_events",
+			 func, func);
+		err = system(buf);
+		if (err < 0) {
+			printf("failed to create kprobe '%s' error '%s'\n",
+			       func, strerror(errno));
+			return -1;
+		}
+
+		strcpy(buf, DEBUGFS);
+		strcat(buf, "events/kprobes/");
+		strcat(buf, func);
+		strcat(buf, "/id");
+		efd = open(buf, O_RDONLY, 0);
+		if (efd < 0) {
+			printf("failed to open event %s\n", func);
+			return -1;
+		}
+
+		err = read(efd, buf, sizeof(buf));
+		if (err < 0 || err >= sizeof(buf)) {
+			printf("read from '%s' failed '%s'\n", func,
+			       strerror(errno));
+			return -1;
+		}
+
+		close(efd);
+		buf[err] = 0;
+		id = atoi(buf);
+		attr.config = id;
+	}
+
+	efd = sys_perf_event_open(&attr, -1/*pid*/, 0/*cpu*/,
+				  -1/*group_fd*/, 0);
+
+	return efd;
+}
+
+static int select_kprobes(void)
+{
+	int fd;
+	int i;
+
+	load_kallsyms();
+
+	kprobe_count = 0;
+	for (i = 0; i < sym_cnt; i++) {
+		if (strstr(syms[i].name, "."))
+			continue;
+		fd = kprobe_api(syms[i].name, NULL, true);
+		if (fd < 0)
+			continue;
+		close(fd);
+		kprobes[kprobe_count] = i;
+		if (++kprobe_count >= MAX_KPROBES)
+			break;
+	}
+
+	return 0;
+}
+
+int main(int argc, char *argv[])
+{
+	int i;
+	__u64 start_time;
+
+	select_kprobes();
+
+	/* clean all trace_kprobe */
+	i = system("echo \"\" > /sys/kernel/debug/tracing/kprobe_events");
+
+	/* test text based API */
+	start_time = time_get_ns();
+	for (i = 0; i < kprobe_count; i++)
+		perf_event_fds[i] = kprobe_api(syms[kprobes[i]].name,
+					       NULL, false);
+	printf("Creating %d kprobes with text-based API takes %f seconds\n",
+	       kprobe_count, (time_get_ns() - start_time) / 1000000000.0);
+
+	start_time = time_get_ns();
+	for (i = 0; i < kprobe_count; i++)
+		if (perf_event_fds[i] > 0)
+			close(perf_event_fds[i]);
+	i = system("echo \"\" > /sys/kernel/debug/tracing/kprobe_events");
+	printf("Cleaning %d kprobes with text-based API takes %f seconds\n",
+	       kprobe_count, (time_get_ns() - start_time) / 1000000000.0);
+
+	/* test PERF_TYPE_PROBE API, with function names */
+	start_time = time_get_ns();
+	for (i = 0; i < kprobe_count; i++)
+		perf_event_fds[i] = kprobe_api(syms[kprobes[i]].name,
+					       NULL, true);
+	printf("Creating %d kprobes with PERF_TYPE_PROBE (function name) takes %f seconds\n",
+	       kprobe_count, (time_get_ns() - start_time) / 1000000000.0);
+
+	start_time = time_get_ns();
+	for (i = 0; i < kprobe_count; i++)
+		if (perf_event_fds[i] > 0)
+			close(perf_event_fds[i]);
+	printf("Cleaning %d kprobes with PERF_TYPE_PROBE (function name) takes %f seconds\n",
+	       kprobe_count, (time_get_ns() - start_time) / 1000000000.0);
+
+	/* test PERF_TYPE_PROBE API, with function address */
+	start_time = time_get_ns();
+	for (i = 0; i < kprobe_count; i++)
+		perf_event_fds[i] = kprobe_api(
+			NULL, (void *)(syms[kprobes[i]].addr), true);
+	printf("Creating %d kprobes with PERF_TYPE_PROBE (function addr) takes %f seconds\n",
+	       kprobe_count, (time_get_ns() - start_time) / 1000000000.0);
+
+	start_time = time_get_ns();
+	for (i = 0; i < kprobe_count; i++)
+		if (perf_event_fds[i] > 0)
+			close(perf_event_fds[i]);
+	printf("Cleaning %d kprobes with PERF_TYPE_PROBE (function addr) takes %f seconds\n",
+	       kprobe_count, (time_get_ns() - start_time) / 1000000000.0);
+	return 0;
+}
-- 
2.9.5
