* [PATCH v6 bpf-next 00/11] bpf, tracing: introduce bpf raw tracepoints
@ 2018-03-27  2:46 ` Alexei Starovoitov
  0 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:46 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

v5->v6:
- avoid changing the semantics of the for_each_kernel_tracepoint() function;
  instead introduce a kernel_tracepoint_find_by_name() helper

v4->v5:
- adopted Daniel's fancy REPEAT macro in bpf_trace.c in patch 8
  
v3->v4:
- adopted Linus's CAST_TO_U64 macro to cast any integer, pointer, or small
  struct to u64. That nicely reduced the size of patch 1

v2->v3:
- with Linus's suggestion introduced generic COUNT_ARGS and CONCATENATE macros
  (or rather moved them from apparmor)
  that cleaned up patches 6 and 8
- added patch 4 to refactor trace_iwlwifi_dev_ucode_error() from 17 args to 4
  Now any tracepoint with more than 12 args will cause a build error

v1->v2:
- simplified the api by combining bpf_raw_tp_open(name) + bpf_attach(prog_fd) into
  bpf_raw_tp_open(name, prog_fd) as suggested by Daniel.
  That simplifies bpf_detach as well, which is now a simple close() of the fd.
- fixed memory leak in error path which was spotted by Daniel.
- fixed bpf_get_stackid(), bpf_perf_event_output() called from raw tracepoints
- added more tests
- fixed allyesconfig build caught by buildbot

v1:
This patch set is a different way to address the pressing need to access
task_struct pointers in sched tracepoints from bpf programs.

The first approach simply added these pointers to sched tracepoints:
https://lkml.org/lkml/2017/12/14/753
which Peter nacked.
A few options were discussed, and eventually the discussion converged on
doing bpf-specific tracepoint_probe_register() probe functions.
Details here:
https://lkml.org/lkml/2017/12/20/929

Patch 1 is a kernel-wide cleanup converting pass-struct-by-value into
pass-struct-by-reference in tracepoint arguments.

Patches 2 and 3 are minor cleanups to fix the allyesconfig build.

Patch 4 refactors trace_iwlwifi_dev_ucode_error() from 17 to 4 args.

Patch 5 introduces the COUNT_ARGS() macro.

Patch 6 is minor prep work to expose the number of arguments passed
into tracepoints.

Patch 7 adds the kernel_tracepoint_find_by_name() helper.

Patch 8 introduces the BPF_RAW_TRACEPOINT api.
Auto-cleanup and multiple concurrent users are must-have
features of a tracing api. For bpf raw tracepoints it looks like:
  // load bpf prog with BPF_PROG_TYPE_RAW_TRACEPOINT type
  prog_fd = bpf_prog_load(...);

  // receive anon_inode fd for given bpf_raw_tracepoint
  // and attach bpf program to it
  raw_tp_fd = bpf_raw_tracepoint_open("xdp_exception", prog_fd);

Ctrl-C of the tracing daemon or cmdline tool will automatically
detach the bpf program, unload it, and unregister the tracepoint probe.
More details in patch 8.
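
As a rough illustration, the open helper reduces to a single bpf() syscall.
The struct layout, field names, and command value below are assumptions based
on the uapi additions described here, not verified against patch 8:

```c
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef BPF_RAW_TRACEPOINT_OPEN
#define BPF_RAW_TRACEPOINT_OPEN 17	/* assumed command value from uapi bpf.h */
#endif

/* Assumed mirror of the raw_tracepoint member of union bpf_attr;
 * names here are illustrative. */
struct raw_tp_attr {
	uint64_t name;		/* user pointer to tracepoint name */
	uint32_t prog_fd;	/* loaded BPF_PROG_TYPE_RAW_TRACEPOINT program */
};

static long raw_tracepoint_open(const char *name, int prog_fd)
{
	struct raw_tp_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.name = (uint64_t)(uintptr_t)name;
	attr.prog_fd = (uint32_t)prog_fd;
	/* on success returns an anon_inode fd; close() detaches and
	 * cleans up, matching the Ctrl-C behavior described above */
	return syscall(__NR_bpf, BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
}
```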

Patch 9 - trivial support in libbpf
Patches 10, 11 - user space tests

samples/bpf/test_overhead performance on 1 cpu:

tracepoint    base  kprobe+bpf tracepoint+bpf raw_tracepoint+bpf
task_rename   1.1M   769K        947K            1.0M
urandom_read  789K   697K        750K            755K

Alexei Starovoitov (11):
  treewide: remove large struct-pass-by-value from tracepoint arguments
  net/mediatek: disambiguate mt76 vs mt7601u trace events
  net/mac802154: disambiguate mac80215 vs mac802154 trace events
  net/wireless/iwlwifi: fix iwlwifi_dev_ucode_error tracepoint
  macro: introduce COUNT_ARGS() macro
  tracepoint: compute num_args at build time
  tracepoint: introduce kernel_tracepoint_find_by_name
  bpf: introduce BPF_RAW_TRACEPOINT
  libbpf: add bpf_raw_tracepoint_open helper
  samples/bpf: raw tracepoint test
  selftests/bpf: test for bpf_get_stackid() from raw tracepoints

 drivers/infiniband/hw/hfi1/file_ops.c              |   2 +-
 drivers/infiniband/hw/hfi1/trace_ctxts.h           |  12 +-
 drivers/net/wireless/intel/iwlwifi/dvm/main.c      |   7 +-
 .../wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h  |  39 ++---
 drivers/net/wireless/intel/iwlwifi/iwl-devtrace.c  |   1 +
 drivers/net/wireless/intel/iwlwifi/mvm/utils.c     |   7 +-
 drivers/net/wireless/mediatek/mt7601u/trace.h      |   6 +-
 include/linux/bpf_types.h                          |   1 +
 include/linux/kernel.h                             |   7 +
 include/linux/trace_events.h                       |  37 ++++
 include/linux/tracepoint-defs.h                    |   1 +
 include/linux/tracepoint.h                         |  18 +-
 include/trace/bpf_probe.h                          |  87 ++++++++++
 include/trace/define_trace.h                       |  15 +-
 include/trace/events/f2fs.h                        |   2 +-
 include/uapi/linux/bpf.h                           |  11 ++
 kernel/bpf/syscall.c                               |  78 +++++++++
 kernel/trace/bpf_trace.c                           | 188 +++++++++++++++++++++
 kernel/tracepoint.c                                |   9 +
 net/mac802154/trace.h                              |   8 +-
 net/wireless/trace.h                               |   2 +-
 samples/bpf/Makefile                               |   1 +
 samples/bpf/bpf_load.c                             |  14 ++
 samples/bpf/test_overhead_raw_tp_kern.c            |  17 ++
 samples/bpf/test_overhead_user.c                   |  12 ++
 security/apparmor/include/path.h                   |   7 +-
 sound/firewire/amdtp-stream-trace.h                |   2 +-
 tools/include/uapi/linux/bpf.h                     |  11 ++
 tools/lib/bpf/bpf.c                                |  11 ++
 tools/lib/bpf/bpf.h                                |   1 +
 tools/testing/selftests/bpf/test_progs.c           |  91 +++++++---
 31 files changed, 615 insertions(+), 90 deletions(-)
 create mode 100644 include/trace/bpf_probe.h
 create mode 100644 samples/bpf/test_overhead_raw_tp_kern.c

-- 
2.9.5

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [PATCH v6 bpf-next 01/11] treewide: remove large struct-pass-by-value from tracepoint arguments
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:46   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:46 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

- fix trace_hfi1_ctxt_info() to pass a large struct by reference instead of by value
- convert 'type array[]' tracepoint arguments into 'type *array',
  since the compiler will warn that sizeof('type array[]') == sizeof('type *array')
  and that the latter should be used instead

The CAST_TO_U64 macro in a later patch will enforce that tracepoint
arguments can only be integers, pointers, or structures no larger than 8 bytes.
Larger structures should be passed by reference.
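
The second bullet hinges on array-to-pointer adjustment of function
parameters; a small standalone illustration (not from the patch):

```c
#include <stddef.h>

/* In a parameter list, 'int arr[]' is adjusted to 'int *arr', so
 * sizeof(arr) yields pointer size, not array size; compilers warn
 * about the mismatch, hence spelling the prototypes with '*'. */
static size_t param_size(int arr[]) { return sizeof(arr); }

static size_t array_size(void)
{
	int a[4] = { 0 };
	return sizeof(a);	/* a real array object: 4 * sizeof(int) */
}
```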

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 drivers/infiniband/hw/hfi1/file_ops.c    |  2 +-
 drivers/infiniband/hw/hfi1/trace_ctxts.h | 12 ++++++------
 include/trace/events/f2fs.h              |  2 +-
 net/wireless/trace.h                     |  2 +-
 sound/firewire/amdtp-stream-trace.h      |  2 +-
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index 41fafebe3b0d..da4aa1a95b11 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -1153,7 +1153,7 @@ static int get_ctxt_info(struct hfi1_filedata *fd, unsigned long arg, u32 len)
 	cinfo.sdma_ring_size = fd->cq->nentries;
 	cinfo.rcvegr_size = uctxt->egrbufs.rcvtid_size;
 
-	trace_hfi1_ctxt_info(uctxt->dd, uctxt->ctxt, fd->subctxt, cinfo);
+	trace_hfi1_ctxt_info(uctxt->dd, uctxt->ctxt, fd->subctxt, &cinfo);
 	if (copy_to_user((void __user *)arg, &cinfo, len))
 		return -EFAULT;
 
diff --git a/drivers/infiniband/hw/hfi1/trace_ctxts.h b/drivers/infiniband/hw/hfi1/trace_ctxts.h
index 4eb4cc798035..e00c8a7d559c 100644
--- a/drivers/infiniband/hw/hfi1/trace_ctxts.h
+++ b/drivers/infiniband/hw/hfi1/trace_ctxts.h
@@ -106,7 +106,7 @@ TRACE_EVENT(hfi1_uctxtdata,
 TRACE_EVENT(hfi1_ctxt_info,
 	    TP_PROTO(struct hfi1_devdata *dd, unsigned int ctxt,
 		     unsigned int subctxt,
-		     struct hfi1_ctxt_info cinfo),
+		     struct hfi1_ctxt_info *cinfo),
 	    TP_ARGS(dd, ctxt, subctxt, cinfo),
 	    TP_STRUCT__entry(DD_DEV_ENTRY(dd)
 			     __field(unsigned int, ctxt)
@@ -120,11 +120,11 @@ TRACE_EVENT(hfi1_ctxt_info,
 	    TP_fast_assign(DD_DEV_ASSIGN(dd);
 			    __entry->ctxt = ctxt;
 			    __entry->subctxt = subctxt;
-			    __entry->egrtids = cinfo.egrtids;
-			    __entry->rcvhdrq_cnt = cinfo.rcvhdrq_cnt;
-			    __entry->rcvhdrq_size = cinfo.rcvhdrq_entsize;
-			    __entry->sdma_ring_size = cinfo.sdma_ring_size;
-			    __entry->rcvegr_size = cinfo.rcvegr_size;
+			    __entry->egrtids = cinfo->egrtids;
+			    __entry->rcvhdrq_cnt = cinfo->rcvhdrq_cnt;
+			    __entry->rcvhdrq_size = cinfo->rcvhdrq_entsize;
+			    __entry->sdma_ring_size = cinfo->sdma_ring_size;
+			    __entry->rcvegr_size = cinfo->rcvegr_size;
 			    ),
 	    TP_printk("[%s] ctxt %u:%u " CINFO_FMT,
 		      __get_str(dev),
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index 06c87f9f720c..795698925d20 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -491,7 +491,7 @@ DEFINE_EVENT(f2fs__truncate_node, f2fs_truncate_node,
 
 TRACE_EVENT(f2fs_truncate_partial_nodes,
 
-	TP_PROTO(struct inode *inode, nid_t nid[], int depth, int err),
+	TP_PROTO(struct inode *inode, nid_t *nid, int depth, int err),
 
 	TP_ARGS(inode, nid, depth, err),
 
diff --git a/net/wireless/trace.h b/net/wireless/trace.h
index 5152938b358d..018c81fa72fb 100644
--- a/net/wireless/trace.h
+++ b/net/wireless/trace.h
@@ -3137,7 +3137,7 @@ TRACE_EVENT(rdev_start_radar_detection,
 
 TRACE_EVENT(rdev_set_mcast_rate,
 	TP_PROTO(struct wiphy *wiphy, struct net_device *netdev,
-		 int mcast_rate[NUM_NL80211_BANDS]),
+		 int *mcast_rate),
 	TP_ARGS(wiphy, netdev, mcast_rate),
 	TP_STRUCT__entry(
 		WIPHY_ENTRY
diff --git a/sound/firewire/amdtp-stream-trace.h b/sound/firewire/amdtp-stream-trace.h
index ea0d486652c8..54cdd4ffa9ce 100644
--- a/sound/firewire/amdtp-stream-trace.h
+++ b/sound/firewire/amdtp-stream-trace.h
@@ -14,7 +14,7 @@
 #include <linux/tracepoint.h>
 
 TRACE_EVENT(in_packet,
-	TP_PROTO(const struct amdtp_stream *s, u32 cycles, u32 cip_header[2], unsigned int payload_length, unsigned int index),
+	TP_PROTO(const struct amdtp_stream *s, u32 cycles, u32 *cip_header, unsigned int payload_length, unsigned int index),
 	TP_ARGS(s, cycles, cip_header, payload_length, index),
 	TP_STRUCT__entry(
 		__field(unsigned int, second)
-- 
2.9.5


* [PATCH v6 bpf-next 02/11] net/mediatek: disambiguate mt76 vs mt7601u trace events
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:46   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:46 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Two trace events are defined with the same name, and both are unused.
They conflict in the allyesconfig build. Rename one of them.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 drivers/net/wireless/mediatek/mt7601u/trace.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/wireless/mediatek/mt7601u/trace.h b/drivers/net/wireless/mediatek/mt7601u/trace.h
index 289897300ef0..82c8898b9076 100644
--- a/drivers/net/wireless/mediatek/mt7601u/trace.h
+++ b/drivers/net/wireless/mediatek/mt7601u/trace.h
@@ -34,7 +34,7 @@
 #define REG_PR_FMT	"%04x=%08x"
 #define REG_PR_ARG	__entry->reg, __entry->val
 
-DECLARE_EVENT_CLASS(dev_reg_evt,
+DECLARE_EVENT_CLASS(dev_reg_evtu,
 	TP_PROTO(struct mt7601u_dev *dev, u32 reg, u32 val),
 	TP_ARGS(dev, reg, val),
 	TP_STRUCT__entry(
@@ -51,12 +51,12 @@ DECLARE_EVENT_CLASS(dev_reg_evt,
 	)
 );
 
-DEFINE_EVENT(dev_reg_evt, reg_read,
+DEFINE_EVENT(dev_reg_evtu, reg_read,
 	TP_PROTO(struct mt7601u_dev *dev, u32 reg, u32 val),
 	TP_ARGS(dev, reg, val)
 );
 
-DEFINE_EVENT(dev_reg_evt, reg_write,
+DEFINE_EVENT(dev_reg_evtu, reg_write,
 	TP_PROTO(struct mt7601u_dev *dev, u32 reg, u32 val),
 	TP_ARGS(dev, reg, val)
 );
-- 
2.9.5


* [PATCH v6 bpf-next 03/11] net/mac802154: disambiguate mac80215 vs mac802154 trace events
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:46   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:46 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Two trace events are defined with the same name, and both are unused.
They conflict in the allyesconfig build. Rename one of them.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 net/mac802154/trace.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/net/mac802154/trace.h b/net/mac802154/trace.h
index 2c8a43d3607f..df855c33daf2 100644
--- a/net/mac802154/trace.h
+++ b/net/mac802154/trace.h
@@ -33,7 +33,7 @@
 
 /* Tracing for driver callbacks */
 
-DECLARE_EVENT_CLASS(local_only_evt,
+DECLARE_EVENT_CLASS(local_only_evt4,
 	TP_PROTO(struct ieee802154_local *local),
 	TP_ARGS(local),
 	TP_STRUCT__entry(
@@ -45,7 +45,7 @@ DECLARE_EVENT_CLASS(local_only_evt,
 	TP_printk(LOCAL_PR_FMT, LOCAL_PR_ARG)
 );
 
-DEFINE_EVENT(local_only_evt, 802154_drv_return_void,
+DEFINE_EVENT(local_only_evt4, 802154_drv_return_void,
 	TP_PROTO(struct ieee802154_local *local),
 	TP_ARGS(local)
 );
@@ -65,12 +65,12 @@ TRACE_EVENT(802154_drv_return_int,
 		  __entry->ret)
 );
 
-DEFINE_EVENT(local_only_evt, 802154_drv_start,
+DEFINE_EVENT(local_only_evt4, 802154_drv_start,
 	TP_PROTO(struct ieee802154_local *local),
 	TP_ARGS(local)
 );
 
-DEFINE_EVENT(local_only_evt, 802154_drv_stop,
+DEFINE_EVENT(local_only_evt4, 802154_drv_stop,
 	TP_PROTO(struct ieee802154_local *local),
 	TP_ARGS(local)
 );
-- 
2.9.5


* [PATCH v6 bpf-next 04/11] net/wireless/iwlwifi: fix iwlwifi_dev_ucode_error tracepoint
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:46   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:46 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Fix the iwlwifi_dev_ucode_error tracepoint to pass a pointer to the error
table instead of all 17 arguments by value.
dvm/main.c and mvm/utils.c each define a 'struct iwl_error_event_table'
with very similar yet subtly different fields and offsets.
The tracepoint is still shared and uses the definition of
'struct iwl_error_event_table' from dvm/commands.h while copying the fields.
Long term, this tracepoint should probably be split into two.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 drivers/net/wireless/intel/iwlwifi/dvm/main.c      |  7 +---
 .../wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h  | 39 ++++++++++------------
 drivers/net/wireless/intel/iwlwifi/iwl-devtrace.c  |  1 +
 drivers/net/wireless/intel/iwlwifi/mvm/utils.c     |  7 +---
 4 files changed, 21 insertions(+), 33 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
index d11d72615de2..e68254e12764 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
@@ -1651,12 +1651,7 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
 			priv->status, table.valid);
 	}
 
* [PATCH v6 bpf-next 04/11] net/wireless/iwlwifi: fix iwlwifi_dev_ucode_error tracepoint
@ 2018-03-27  2:46   ` Alexei Starovoitov
  0 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:46 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Fix the iwlwifi_dev_ucode_error tracepoint to pass a pointer to the
error table instead of all 17 arguments by value.
dvm/main.c and mvm/utils.c each define 'struct iwl_error_event_table'
with very similar yet subtly different fields and offsets.
The tracepoint remains common and uses the definition of
'struct iwl_error_event_table' from dvm/commands.h while copying fields.
Long term this tracepoint should probably be split into two.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 drivers/net/wireless/intel/iwlwifi/dvm/main.c      |  7 +---
 .../wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h  | 39 ++++++++++------------
 drivers/net/wireless/intel/iwlwifi/iwl-devtrace.c  |  1 +
 drivers/net/wireless/intel/iwlwifi/mvm/utils.c     |  7 +---
 4 files changed, 21 insertions(+), 33 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
index d11d72615de2..e68254e12764 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
@@ -1651,12 +1651,7 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
 			priv->status, table.valid);
 	}
 
-	trace_iwlwifi_dev_ucode_error(trans->dev, table.error_id, table.tsf_low,
-				      table.data1, table.data2, table.line,
-				      table.blink2, table.ilink1, table.ilink2,
-				      table.bcon_time, table.gp1, table.gp2,
-				      table.gp3, table.ucode_ver, table.hw_ver,
-				      0, table.brd_ver);
+	trace_iwlwifi_dev_ucode_error(trans->dev, &table, 0, table.brd_ver);
 	IWL_ERR(priv, "0x%08X | %-28s\n", table.error_id,
 		desc_lookup(table.error_id));
 	IWL_ERR(priv, "0x%08X | uPc\n", table.pc);
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h
index 9518a82f44c2..27e3e4e96aa2 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h
@@ -126,14 +126,11 @@ TRACE_EVENT(iwlwifi_dev_tx,
 		  __entry->framelen, __entry->skbaddr)
 );
 
+struct iwl_error_event_table;
 TRACE_EVENT(iwlwifi_dev_ucode_error,
-	TP_PROTO(const struct device *dev, u32 desc, u32 tsf_low,
-		 u32 data1, u32 data2, u32 line, u32 blink2, u32 ilink1,
-		 u32 ilink2, u32 bcon_time, u32 gp1, u32 gp2, u32 rev_type,
-		 u32 major, u32 minor, u32 hw_ver, u32 brd_ver),
-	TP_ARGS(dev, desc, tsf_low, data1, data2, line,
-		 blink2, ilink1, ilink2, bcon_time, gp1, gp2,
-		 rev_type, major, minor, hw_ver, brd_ver),
+	TP_PROTO(const struct device *dev, const struct iwl_error_event_table *table,
+		 u32 hw_ver, u32 brd_ver),
+	TP_ARGS(dev, table, hw_ver, brd_ver),
 	TP_STRUCT__entry(
 		DEV_ENTRY
 		__field(u32, desc)
@@ -155,20 +152,20 @@ TRACE_EVENT(iwlwifi_dev_ucode_error,
 	),
 	TP_fast_assign(
 		DEV_ASSIGN;
-		__entry->desc = desc;
-		__entry->tsf_low = tsf_low;
-		__entry->data1 = data1;
-		__entry->data2 = data2;
-		__entry->line = line;
-		__entry->blink2 = blink2;
-		__entry->ilink1 = ilink1;
-		__entry->ilink2 = ilink2;
-		__entry->bcon_time = bcon_time;
-		__entry->gp1 = gp1;
-		__entry->gp2 = gp2;
-		__entry->rev_type = rev_type;
-		__entry->major = major;
-		__entry->minor = minor;
+		__entry->desc = table->error_id;
+		__entry->tsf_low = table->tsf_low;
+		__entry->data1 = table->data1;
+		__entry->data2 = table->data2;
+		__entry->line = table->line;
+		__entry->blink2 = table->blink2;
+		__entry->ilink1 = table->ilink1;
+		__entry->ilink2 = table->ilink2;
+		__entry->bcon_time = table->bcon_time;
+		__entry->gp1 = table->gp1;
+		__entry->gp2 = table->gp2;
+		__entry->rev_type = table->gp3;
+		__entry->major = table->ucode_ver;
+		__entry->minor = table->hw_ver;
 		__entry->hw_ver = hw_ver;
 		__entry->brd_ver = brd_ver;
 	),
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace.c b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace.c
index 50510fb6ab8c..6aa719865a58 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace.c
@@ -30,6 +30,7 @@
 #ifndef __CHECKER__
 #include "iwl-trans.h"
 
+#include "dvm/commands.h"
 #define CREATE_TRACE_POINTS
 #include "iwl-devtrace.h"
 
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
index d65e1db7c097..5442ead876eb 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
@@ -549,12 +549,7 @@ static void iwl_mvm_dump_lmac_error_log(struct iwl_mvm *mvm, u32 base)
 
 	IWL_ERR(mvm, "Loaded firmware version: %s\n", mvm->fw->fw_version);
 
-	trace_iwlwifi_dev_ucode_error(trans->dev, table.error_id, table.tsf_low,
-				      table.data1, table.data2, table.data3,
-				      table.blink2, table.ilink1,
-				      table.ilink2, table.bcon_time, table.gp1,
-				      table.gp2, table.fw_rev_type, table.major,
-				      table.minor, table.hw_ver, table.brd_ver);
+	trace_iwlwifi_dev_ucode_error(trans->dev, &table, table.hw_ver, table.brd_ver);
 	IWL_ERR(mvm, "0x%08X | %-28s\n", table.error_id,
 		desc_lookup(table.error_id));
 	IWL_ERR(mvm, "0x%08X | trm_hw_status0\n", table.trm_hw_status0);
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v6 bpf-next 05/11] macro: introduce COUNT_ARGS() macro
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:47   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:47 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Move the COUNT_ARGS() macro from apparmor to a generic header and
extend it to count up to twelve.

COUNT() was an alternative name for this logic, but it's already used
for a different purpose in many other places.

The same applies to the CONCATENATE() macro.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/kernel.h           | 7 +++++++
 security/apparmor/include/path.h | 7 +------
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 3fd291503576..293fa0677fba 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -919,6 +919,13 @@ static inline void ftrace_dump(enum ftrace_dump_mode oops_dump_mode) { }
 #define swap(a, b) \
 	do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)
 
+/* This counts to 12. Any more, it will return 13th argument. */
+#define __COUNT_ARGS(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _n, X...) _n
+#define COUNT_ARGS(X...) __COUNT_ARGS(, ##X, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
+
+#define __CONCAT(a, b) a ## b
+#define CONCATENATE(a, b) __CONCAT(a, b)
+
 /**
  * container_of - cast a member of a structure out to the containing structure
  * @ptr:	the pointer to the member.
diff --git a/security/apparmor/include/path.h b/security/apparmor/include/path.h
index 05fb3305671e..e042b994f2b8 100644
--- a/security/apparmor/include/path.h
+++ b/security/apparmor/include/path.h
@@ -43,15 +43,10 @@ struct aa_buffers {
 
 DECLARE_PER_CPU(struct aa_buffers, aa_buffers);
 
-#define COUNT_ARGS(X...) COUNT_ARGS_HELPER(, ##X, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
-#define COUNT_ARGS_HELPER(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, n, X...) n
-#define CONCAT(X, Y) X ## Y
-#define CONCAT_AFTER(X, Y) CONCAT(X, Y)
-
 #define ASSIGN(FN, X, N) ((X) = FN(N))
 #define EVAL1(FN, X) ASSIGN(FN, X, 0) /*X = FN(0)*/
 #define EVAL2(FN, X, Y...) do { ASSIGN(FN, X, 1);  EVAL1(FN, Y); } while (0)
-#define EVAL(FN, X...) CONCAT_AFTER(EVAL, COUNT_ARGS(X))(FN, X)
+#define EVAL(FN, X...) CONCATENATE(EVAL, COUNT_ARGS(X))(FN, X)
 
 #define for_each_cpu_buffer(I) for ((I) = 0; (I) < MAX_PATH_BUFFERS; (I)++)
 
-- 
2.9.5



* [PATCH v6 bpf-next 06/11] tracepoint: compute num_args at build time
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:47   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:47 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Compute the number of arguments passed into a tracepoint
at compile time and store it as part of 'struct tracepoint'.
The number is necessary to check the safety of bpf program access,
which is added in a subsequent patch.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/tracepoint-defs.h |  1 +
 include/linux/tracepoint.h      | 12 ++++++------
 include/trace/define_trace.h    | 14 +++++++-------
 3 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h
index 64ed7064f1fa..39a283c61c51 100644
--- a/include/linux/tracepoint-defs.h
+++ b/include/linux/tracepoint-defs.h
@@ -33,6 +33,7 @@ struct tracepoint {
 	int (*regfunc)(void);
 	void (*unregfunc)(void);
 	struct tracepoint_func __rcu *funcs;
+	u32 num_args;
 };
 
 #endif
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index c94f466d57ef..c92f4adbc0d7 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -230,18 +230,18 @@ extern void syscall_unregfunc(void);
  * structures, so we create an array of pointers that will be used for iteration
  * on the tracepoints.
  */
-#define DEFINE_TRACE_FN(name, reg, unreg)				 \
+#define DEFINE_TRACE_FN(name, reg, unreg, num_args)			 \
 	static const char __tpstrtab_##name[]				 \
 	__attribute__((section("__tracepoints_strings"))) = #name;	 \
 	struct tracepoint __tracepoint_##name				 \
 	__attribute__((section("__tracepoints"))) =			 \
-		{ __tpstrtab_##name, STATIC_KEY_INIT_FALSE, reg, unreg, NULL };\
+		{ __tpstrtab_##name, STATIC_KEY_INIT_FALSE, reg, unreg, NULL, num_args };\
 	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
 	__attribute__((section("__tracepoints_ptrs"))) =		 \
 		&__tracepoint_##name;
 
-#define DEFINE_TRACE(name)						\
-	DEFINE_TRACE_FN(name, NULL, NULL);
+#define DEFINE_TRACE(name, num_args)					\
+	DEFINE_TRACE_FN(name, NULL, NULL, num_args);
 
 #define EXPORT_TRACEPOINT_SYMBOL_GPL(name)				\
 	EXPORT_SYMBOL_GPL(__tracepoint_##name)
@@ -275,8 +275,8 @@ extern void syscall_unregfunc(void);
 		return false;						\
 	}
 
-#define DEFINE_TRACE_FN(name, reg, unreg)
-#define DEFINE_TRACE(name)
+#define DEFINE_TRACE_FN(name, reg, unreg, num_args)
+#define DEFINE_TRACE(name, num_args)
 #define EXPORT_TRACEPOINT_SYMBOL_GPL(name)
 #define EXPORT_TRACEPOINT_SYMBOL(name)
 
diff --git a/include/trace/define_trace.h b/include/trace/define_trace.h
index d9e3d4aa3f6e..96b22ace9ae7 100644
--- a/include/trace/define_trace.h
+++ b/include/trace/define_trace.h
@@ -25,7 +25,7 @@
 
 #undef TRACE_EVENT
 #define TRACE_EVENT(name, proto, args, tstruct, assign, print)	\
-	DEFINE_TRACE(name)
+	DEFINE_TRACE(name, COUNT_ARGS(args))
 
 #undef TRACE_EVENT_CONDITION
 #define TRACE_EVENT_CONDITION(name, proto, args, cond, tstruct, assign, print) \
@@ -39,24 +39,24 @@
 #undef TRACE_EVENT_FN
 #define TRACE_EVENT_FN(name, proto, args, tstruct,		\
 		assign, print, reg, unreg)			\
-	DEFINE_TRACE_FN(name, reg, unreg)
+	DEFINE_TRACE_FN(name, reg, unreg, COUNT_ARGS(args))
 
 #undef TRACE_EVENT_FN_COND
 #define TRACE_EVENT_FN_COND(name, proto, args, cond, tstruct,		\
 		assign, print, reg, unreg)			\
-	DEFINE_TRACE_FN(name, reg, unreg)
+	DEFINE_TRACE_FN(name, reg, unreg, COUNT_ARGS(args))
 
 #undef DEFINE_EVENT
 #define DEFINE_EVENT(template, name, proto, args) \
-	DEFINE_TRACE(name)
+	DEFINE_TRACE(name, COUNT_ARGS(args))
 
 #undef DEFINE_EVENT_FN
 #define DEFINE_EVENT_FN(template, name, proto, args, reg, unreg) \
-	DEFINE_TRACE_FN(name, reg, unreg)
+	DEFINE_TRACE_FN(name, reg, unreg, COUNT_ARGS(args))
 
 #undef DEFINE_EVENT_PRINT
 #define DEFINE_EVENT_PRINT(template, name, proto, args, print)	\
-	DEFINE_TRACE(name)
+	DEFINE_TRACE(name, COUNT_ARGS(args))
 
 #undef DEFINE_EVENT_CONDITION
 #define DEFINE_EVENT_CONDITION(template, name, proto, args, cond) \
@@ -64,7 +64,7 @@
 
 #undef DECLARE_TRACE
 #define DECLARE_TRACE(name, proto, args)	\
-	DEFINE_TRACE(name)
+	DEFINE_TRACE(name, COUNT_ARGS(args))
 
 #undef TRACE_INCLUDE
 #undef __TRACE_INCLUDE
-- 
2.9.5



* [PATCH v6 bpf-next 07/11] tracepoint: introduce kernel_tracepoint_find_by_name
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:47   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:47 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Introduce the kernel_tracepoint_find_by_name() helper to let the bpf
core find a tracepoint by name and later attach a bpf probe to it.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/tracepoint.h | 6 ++++++
 kernel/tracepoint.c        | 9 +++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index c92f4adbc0d7..a00b84473211 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -43,6 +43,12 @@ tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data);
 extern void
 for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
 		void *priv);
+#ifdef CONFIG_TRACEPOINTS
+struct tracepoint *kernel_tracepoint_find_by_name(const char *name);
+#else
+static inline struct tracepoint *
+kernel_tracepoint_find_by_name(const char *name) { return NULL; }
+#endif
 
 #ifdef CONFIG_MODULES
 struct tp_module {
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 671b13457387..e2a9a0391ae2 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -528,6 +528,15 @@ void for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
 }
 EXPORT_SYMBOL_GPL(for_each_kernel_tracepoint);
 
+struct tracepoint *kernel_tracepoint_find_by_name(const char *name)
+{
+	struct tracepoint * const *tp = __start___tracepoints_ptrs;
+
+	for (; tp < __stop___tracepoints_ptrs; tp++)
+		if (!strcmp((*tp)->name, name))
+			return *tp;
+	return NULL;
+}
 #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
 
 /* NB: reg/unreg are called while guarded with the tracepoints_mutex */
-- 
2.9.5



* [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:47   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:47 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Introduce BPF_PROG_TYPE_RAW_TRACEPOINT bpf program type to access
kernel internal arguments of the tracepoints in their raw form.

From the bpf program's point of view, access to the arguments looks like:
struct bpf_raw_tracepoint_args {
       __u64 args[0];
};

int bpf_prog(struct bpf_raw_tracepoint_args *ctx)
{
  // program can read args[N] where N depends on tracepoint
  // and is statically verified at program load+attach time
}

The kprobe+bpf infrastructure allows programs to access function arguments.
This feature allows programs to access raw tracepoint arguments.

Similar to the proposed 'dynamic ftrace events' there are no ABI guarantees
as to what the tracepoint arguments are and what their meaning is.
The program needs to type-cast args properly and use the bpf_probe_read()
helper to access struct fields when an argument is a pointer.

For every tracepoint a __bpf_trace_##call function is prepared.
In assembler it looks like:
(gdb) disassemble __bpf_trace_xdp_exception
Dump of assembler code for function __bpf_trace_xdp_exception:
   0xffffffff81132080 <+0>:     mov    %ecx,%ecx
   0xffffffff81132082 <+2>:     jmpq   0xffffffff811231f0 <bpf_trace_run3>

where

TRACE_EVENT(xdp_exception,
        TP_PROTO(const struct net_device *dev,
                 const struct bpf_prog *xdp, u32 act),

The above assembler snippet casts the 32-bit 'act' field to 'u64'
to pass into bpf_trace_run3(), while the 'dev' and 'xdp' args are passed as-is.
All ~500 of the __bpf_trace_*() functions are only 5-10 bytes long,
and in total this approach adds 7k bytes to .text and 8k bytes
to .rodata, since the probe funcs need to appear in kallsyms.
The alternative to making __bpf_trace_##call global in kallsyms
would have been to keep the functions static and add another pointer to
them in 'struct trace_event_class' and 'struct trace_event_call',
but keeping them global simplifies the implementation and keeps it
independent from the tracing side.

This approach also gives the lowest possible overhead
when calling trace_xdp_exception() from kernel C code and
transitioning into bpf land.
Since tracepoint+bpf is used at speeds of 1M+ events per second,
this is a very valuable optimization.

Since the ftrace and perf sides are not involved, a new
BPF_RAW_TRACEPOINT_OPEN sys_bpf command is introduced
that returns an anon_inode FD of a 'bpf-raw-tracepoint' object.

The user space looks like:
// load bpf prog with BPF_PROG_TYPE_RAW_TRACEPOINT type
prog_fd = bpf_prog_load(...);
// receive anon_inode fd for given bpf_raw_tracepoint with prog attached
raw_tp_fd = bpf_raw_tracepoint_open("xdp_exception", prog_fd);

Ctrl-C of the tracing daemon or cmdline tool that uses this feature
will automatically detach the bpf program, unload it and
unregister the tracepoint probe.

On the kernel side kernel_tracepoint_find_by_name() is used
to find the tracepoint with the "xdp_exception" name
(that would be the __tracepoint_xdp_exception record).

Then kallsyms_lookup_name() is used to find the address
of the __bpf_trace_xdp_exception() probe function.

And finally tracepoint_probe_register() is used to connect the probe
with the tracepoint.

The addition of bpf_raw_tracepoint doesn't interfere with the ftrace and
perf tracepoint mechanisms. perf_event_open() can be used in parallel
on the same tracepoint.
Multiple bpf_raw_tracepoint_open("xdp_exception", prog_fd) calls are
permitted, each with its own bpf program. The kernel will execute
all tracepoint probes and all attached bpf programs.

In the future bpf_raw_tracepoints can be extended with
query/introspection logic.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
* [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
@ 2018-03-27  2:47   ` Alexei Starovoitov
  0 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:47 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Introduce the BPF_PROG_TYPE_RAW_TRACEPOINT bpf program type to access
kernel-internal tracepoint arguments in their raw form.

From the bpf program's point of view, access to the arguments looks like:
struct bpf_raw_tracepoint_args {
       __u64 args[0];
};

int bpf_prog(struct bpf_raw_tracepoint_args *ctx)
{
  // program can read args[N] where N depends on tracepoint
  // and statically verified at program load+attach time
}

The kprobe+bpf infrastructure allows programs to access function arguments.
This feature allows programs to access raw tracepoint arguments.

Similar to the proposed 'dynamic ftrace events', there are no ABI guarantees
as to what the tracepoint arguments are or what they mean.
The program needs to type-cast args properly and use the bpf_probe_read()
helper to access struct fields when an argument is a pointer.

For every tracepoint a __bpf_trace_##call function is prepared.
In assembly it looks like:
(gdb) disassemble __bpf_trace_xdp_exception
Dump of assembler code for function __bpf_trace_xdp_exception:
   0xffffffff81132080 <+0>:     mov    %ecx,%ecx
   0xffffffff81132082 <+2>:     jmpq   0xffffffff811231f0 <bpf_trace_run3>

where

TRACE_EVENT(xdp_exception,
        TP_PROTO(const struct net_device *dev,
                 const struct bpf_prog *xdp, u32 act),

The above assembly snippet casts the 32-bit 'act' field to 'u64'
before passing it into bpf_trace_run3(), while the 'dev' and 'xdp' args are passed as-is.
All ~500 __bpf_trace_*() functions are only 5-10 bytes long,
and in total this approach adds 7k bytes to .text and 8k bytes
to .rodata, since the probe funcs need to appear in kallsyms.
The alternative to making __bpf_trace_##call global in kallsyms
would have been to keep them static and add another pointer to these
static functions to 'struct trace_event_class' and 'struct trace_event_call',
but keeping them global simplifies the implementation and keeps it
independent of the tracing side.

This approach also gives the lowest possible overhead
when calling trace_xdp_exception() from kernel C code and
transitioning into bpf land.
Since tracepoint+bpf is used at speeds of 1M+ events per second,
this is a very valuable optimization.

Since the ftrace and perf sides are not involved, a new
BPF_RAW_TRACEPOINT_OPEN sys_bpf command is introduced
that returns an anon_inode FD of a 'bpf-raw-tracepoint' object.

The user space usage looks like:
// load bpf prog with BPF_PROG_TYPE_RAW_TRACEPOINT type
prog_fd = bpf_prog_load(...);
// receive anon_inode fd for given bpf_raw_tracepoint with prog attached
raw_tp_fd = bpf_raw_tracepoint_open("xdp_exception", prog_fd);

Ctrl-C of the tracing daemon or cmdline tool that uses this feature
will automatically detach the bpf program, unload it, and
unregister the tracepoint probe.

On the kernel side, kernel_tracepoint_find_by_name() is used
to find the tracepoint with the "xdp_exception" name
(that is, the __tracepoint_xdp_exception record).

Then kallsyms_lookup_name() is used to find the address
of the __bpf_trace_xdp_exception() probe function.

Finally, tracepoint_probe_register() is used to connect the probe
to the tracepoint.

The addition of bpf_raw_tracepoint doesn't interfere with the ftrace and perf
tracepoint mechanisms; perf_event_open() can be used in parallel
on the same tracepoint.
Multiple bpf_raw_tracepoint_open("xdp_exception", prog_fd) calls are
permitted, each with its own bpf program. The kernel will execute
all tracepoint probes and all attached bpf programs.

In the future bpf_raw_tracepoints can be extended with
query/introspection logic.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf_types.h    |   1 +
 include/linux/trace_events.h |  37 +++++++++
 include/trace/bpf_probe.h    |  87 ++++++++++++++++++++
 include/trace/define_trace.h |   1 +
 include/uapi/linux/bpf.h     |  11 +++
 kernel/bpf/syscall.c         |  78 ++++++++++++++++++
 kernel/trace/bpf_trace.c     | 188 +++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 403 insertions(+)
 create mode 100644 include/trace/bpf_probe.h

diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 5e2e8a49fb21..6d7243bfb0ff 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -19,6 +19,7 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_SK_MSG, sk_msg)
 BPF_PROG_TYPE(BPF_PROG_TYPE_KPROBE, kprobe)
 BPF_PROG_TYPE(BPF_PROG_TYPE_TRACEPOINT, tracepoint)
 BPF_PROG_TYPE(BPF_PROG_TYPE_PERF_EVENT, perf_event)
+BPF_PROG_TYPE(BPF_PROG_TYPE_RAW_TRACEPOINT, raw_tracepoint)
 #endif
 #ifdef CONFIG_CGROUP_BPF
 BPF_PROG_TYPE(BPF_PROG_TYPE_CGROUP_DEVICE, cg_dev)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 8a1442c4e513..e37fcd7505da 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -468,6 +468,8 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx);
 int perf_event_attach_bpf_prog(struct perf_event *event, struct bpf_prog *prog);
 void perf_event_detach_bpf_prog(struct perf_event *event);
 int perf_event_query_prog_array(struct perf_event *event, void __user *info);
+int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog);
+int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog);
 #else
 static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 {
@@ -487,6 +489,14 @@ perf_event_query_prog_array(struct perf_event *event, void __user *info)
 {
 	return -EOPNOTSUPP;
 }
+static inline int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *p)
+{
+	return -EOPNOTSUPP;
+}
+static inline int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *p)
+{
+	return -EOPNOTSUPP;
+}
 #endif
 
 enum {
@@ -546,6 +556,33 @@ extern void ftrace_profile_free_filter(struct perf_event *event);
 void perf_trace_buf_update(void *record, u16 type);
 void *perf_trace_buf_alloc(int size, struct pt_regs **regs, int *rctxp);
 
+void bpf_trace_run1(struct bpf_prog *prog, u64 arg1);
+void bpf_trace_run2(struct bpf_prog *prog, u64 arg1, u64 arg2);
+void bpf_trace_run3(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		    u64 arg3);
+void bpf_trace_run4(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		    u64 arg3, u64 arg4);
+void bpf_trace_run5(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		    u64 arg3, u64 arg4, u64 arg5);
+void bpf_trace_run6(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		    u64 arg3, u64 arg4, u64 arg5, u64 arg6);
+void bpf_trace_run7(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7);
+void bpf_trace_run8(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
+		    u64 arg8);
+void bpf_trace_run9(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
+		    u64 arg8, u64 arg9);
+void bpf_trace_run10(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
+		     u64 arg8, u64 arg9, u64 arg10);
+void bpf_trace_run11(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
+		     u64 arg8, u64 arg9, u64 arg10, u64 arg11);
+void bpf_trace_run12(struct bpf_prog *prog, u64 arg1, u64 arg2,
+		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
+		     u64 arg8, u64 arg9, u64 arg10, u64 arg11, u64 arg12);
 void perf_trace_run_bpf_submit(void *raw_data, int size, int rctx,
 			       struct trace_event_call *call, u64 count,
 			       struct pt_regs *regs, struct hlist_head *head,
diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
new file mode 100644
index 000000000000..d2cc0663e618
--- /dev/null
+++ b/include/trace/bpf_probe.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#undef TRACE_SYSTEM_VAR
+
+#ifdef CONFIG_BPF_EVENTS
+
+#undef __entry
+#define __entry entry
+
+#undef __get_dynamic_array
+#define __get_dynamic_array(field)	\
+		((void *)__entry + (__entry->__data_loc_##field & 0xffff))
+
+#undef __get_dynamic_array_len
+#define __get_dynamic_array_len(field)	\
+		((__entry->__data_loc_##field >> 16) & 0xffff)
+
+#undef __get_str
+#define __get_str(field) ((char *)__get_dynamic_array(field))
+
+#undef __get_bitmask
+#define __get_bitmask(field) (char *)__get_dynamic_array(field)
+
+#undef __perf_count
+#define __perf_count(c)	(c)
+
+#undef __perf_task
+#define __perf_task(t)	(t)
+
+/* cast any integer, pointer, or small struct to u64 */
+#define UINTTYPE(size) \
+	__typeof__(__builtin_choose_expr(size == 1,  (u8)1, \
+		   __builtin_choose_expr(size == 2, (u16)2, \
+		   __builtin_choose_expr(size == 4, (u32)3, \
+		   __builtin_choose_expr(size == 8, (u64)4, \
+					 (void)5)))))
+#define __CAST_TO_U64(x) ({ \
+	typeof(x) __src = (x); \
+	UINTTYPE(sizeof(x)) __dst; \
+	memcpy(&__dst, &__src, sizeof(__dst)); \
+	(u64)__dst; })
+
+#define __CAST1(a,...) __CAST_TO_U64(a)
+#define __CAST2(a,...) __CAST_TO_U64(a), __CAST1(__VA_ARGS__)
+#define __CAST3(a,...) __CAST_TO_U64(a), __CAST2(__VA_ARGS__)
+#define __CAST4(a,...) __CAST_TO_U64(a), __CAST3(__VA_ARGS__)
+#define __CAST5(a,...) __CAST_TO_U64(a), __CAST4(__VA_ARGS__)
+#define __CAST6(a,...) __CAST_TO_U64(a), __CAST5(__VA_ARGS__)
+#define __CAST7(a,...) __CAST_TO_U64(a), __CAST6(__VA_ARGS__)
+#define __CAST8(a,...) __CAST_TO_U64(a), __CAST7(__VA_ARGS__)
+#define __CAST9(a,...) __CAST_TO_U64(a), __CAST8(__VA_ARGS__)
+#define __CAST10(a,...) __CAST_TO_U64(a), __CAST9(__VA_ARGS__)
+#define __CAST11(a,...) __CAST_TO_U64(a), __CAST10(__VA_ARGS__)
+#define __CAST12(a,...) __CAST_TO_U64(a), __CAST11(__VA_ARGS__)
+/* tracepoints with more than 12 arguments will hit build error */
+#define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
+
+#undef DECLARE_EVENT_CLASS
+#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
+/* no 'static' here. The bpf probe functions are global */		\
+notrace void								\
+__bpf_trace_##call(void *__data, proto)					\
+{									\
+	struct bpf_prog *prog = __data;					\
+	\
+	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args));	\
+}
+
+/*
+ * This part is compiled out, it is only here as a build time check
+ * to make sure that if the tracepoint handling changes, the
+ * bpf probe will fail to compile unless it too is updated.
+ */
+#undef DEFINE_EVENT
+#define DEFINE_EVENT(template, call, proto, args)			\
+static inline void bpf_test_probe_##call(void)				\
+{									\
+	check_trace_callback_type_##call(__bpf_trace_##template);	\
+}
+
+
+#undef DEFINE_EVENT_PRINT
+#define DEFINE_EVENT_PRINT(template, name, proto, args, print)	\
+	DEFINE_EVENT(template, name, PARAMS(proto), PARAMS(args))
+
+#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
+#endif /* CONFIG_BPF_EVENTS */
diff --git a/include/trace/define_trace.h b/include/trace/define_trace.h
index 96b22ace9ae7..5f8216bc261f 100644
--- a/include/trace/define_trace.h
+++ b/include/trace/define_trace.h
@@ -95,6 +95,7 @@
 #ifdef TRACEPOINTS_ENABLED
 #include <trace/trace_events.h>
 #include <trace/perf.h>
+#include <trace/bpf_probe.h>
 #endif
 
 #undef TRACE_EVENT
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 18b7c510c511..1878201c2d77 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -94,6 +94,7 @@ enum bpf_cmd {
 	BPF_MAP_GET_FD_BY_ID,
 	BPF_OBJ_GET_INFO_BY_FD,
 	BPF_PROG_QUERY,
+	BPF_RAW_TRACEPOINT_OPEN,
 };
 
 enum bpf_map_type {
@@ -134,6 +135,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_SK_SKB,
 	BPF_PROG_TYPE_CGROUP_DEVICE,
 	BPF_PROG_TYPE_SK_MSG,
+	BPF_PROG_TYPE_RAW_TRACEPOINT,
 };
 
 enum bpf_attach_type {
@@ -344,6 +346,11 @@ union bpf_attr {
 		__aligned_u64	prog_ids;
 		__u32		prog_cnt;
 	} query;
+
+	struct {
+		__u64 name;
+		__u32 prog_fd;
+	} raw_tracepoint;
 } __attribute__((aligned(8)));
 
 /* BPF helper function descriptions:
@@ -1152,4 +1159,8 @@ struct bpf_cgroup_dev_ctx {
 	__u32 minor;
 };
 
+struct bpf_raw_tracepoint_args {
+	__u64 args[0];
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 3aeb4ea2a93a..7486b450672e 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1311,6 +1311,81 @@ static int bpf_obj_get(const union bpf_attr *attr)
 				attr->file_flags);
 }
 
+struct bpf_raw_tracepoint {
+	struct tracepoint *tp;
+	struct bpf_prog *prog;
+};
+
+static int bpf_raw_tracepoint_release(struct inode *inode, struct file *filp)
+{
+	struct bpf_raw_tracepoint *raw_tp = filp->private_data;
+
+	if (raw_tp->prog) {
+		bpf_probe_unregister(raw_tp->tp, raw_tp->prog);
+		bpf_prog_put(raw_tp->prog);
+	}
+	kfree(raw_tp);
+	return 0;
+}
+
+static const struct file_operations bpf_raw_tp_fops = {
+	.release	= bpf_raw_tracepoint_release,
+	.read		= bpf_dummy_read,
+	.write		= bpf_dummy_write,
+};
+
+#define BPF_RAW_TRACEPOINT_OPEN_LAST_FIELD raw_tracepoint.prog_fd
+
+static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
+{
+	struct bpf_raw_tracepoint *raw_tp;
+	struct tracepoint *tp;
+	struct bpf_prog *prog;
+	char tp_name[128];
+	int tp_fd, err;
+
+	if (strncpy_from_user(tp_name, u64_to_user_ptr(attr->raw_tracepoint.name),
+			      sizeof(tp_name) - 1) < 0)
+		return -EFAULT;
+	tp_name[sizeof(tp_name) - 1] = 0;
+
+	tp = kernel_tracepoint_find_by_name(tp_name);
+	if (!tp)
+		return -ENOENT;
+
+	raw_tp = kmalloc(sizeof(*raw_tp), GFP_USER | __GFP_ZERO);
+	if (!raw_tp)
+		return -ENOMEM;
+	raw_tp->tp = tp;
+
+	prog = bpf_prog_get_type(attr->raw_tracepoint.prog_fd,
+				 BPF_PROG_TYPE_RAW_TRACEPOINT);
+	if (IS_ERR(prog)) {
+		err = PTR_ERR(prog);
+		goto out_free_tp;
+	}
+
+	err = bpf_probe_register(raw_tp->tp, prog);
+	if (err)
+		goto out_put_prog;
+
+	raw_tp->prog = prog;
+	tp_fd = anon_inode_getfd("bpf-raw-tracepoint", &bpf_raw_tp_fops, raw_tp,
+				 O_CLOEXEC);
+	if (tp_fd < 0) {
+		bpf_probe_unregister(raw_tp->tp, prog);
+		err = tp_fd;
+		goto out_put_prog;
+	}
+	return tp_fd;
+
+out_put_prog:
+	bpf_prog_put(prog);
+out_free_tp:
+	kfree(raw_tp);
+	return err;
+}
+
 #ifdef CONFIG_CGROUP_BPF
 
 #define BPF_PROG_ATTACH_LAST_FIELD attach_flags
@@ -1921,6 +1996,9 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
 	case BPF_OBJ_GET_INFO_BY_FD:
 		err = bpf_obj_get_info_by_fd(&attr, uattr);
 		break;
+	case BPF_RAW_TRACEPOINT_OPEN:
+		err = bpf_raw_tracepoint_open(&attr);
+		break;
 	default:
 		err = -EINVAL;
 		break;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c634e093951f..00e86aa11360 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -723,6 +723,86 @@ const struct bpf_verifier_ops tracepoint_verifier_ops = {
 const struct bpf_prog_ops tracepoint_prog_ops = {
 };
 
+/*
+ * bpf_raw_tp_regs are separate from bpf_pt_regs used from skb/xdp
+ * to avoid potential recursive reuse issue when/if tracepoints are added
+ * inside bpf_*_event_output and/or bpf_get_stack_id
+ */
+static DEFINE_PER_CPU(struct pt_regs, bpf_raw_tp_regs);
+BPF_CALL_5(bpf_perf_event_output_raw_tp, struct bpf_raw_tracepoint_args *, args,
+	   struct bpf_map *, map, u64, flags, void *, data, u64, size)
+{
+	struct pt_regs *regs = this_cpu_ptr(&bpf_raw_tp_regs);
+
+	perf_fetch_caller_regs(regs);
+	return ____bpf_perf_event_output(regs, map, flags, data, size);
+}
+
+static const struct bpf_func_proto bpf_perf_event_output_proto_raw_tp = {
+	.func		= bpf_perf_event_output_raw_tp,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+	.arg4_type	= ARG_PTR_TO_MEM,
+	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+};
+
+BPF_CALL_3(bpf_get_stackid_raw_tp, struct bpf_raw_tracepoint_args *, args,
+	   struct bpf_map *, map, u64, flags)
+{
+	struct pt_regs *regs = this_cpu_ptr(&bpf_raw_tp_regs);
+
+	perf_fetch_caller_regs(regs);
+	/* similar to bpf_perf_event_output_tp, but pt_regs fetched differently */
+	return bpf_get_stackid((unsigned long) regs, (unsigned long) map,
+			       flags, 0, 0);
+}
+
+static const struct bpf_func_proto bpf_get_stackid_proto_raw_tp = {
+	.func		= bpf_get_stackid_raw_tp,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+};
+
+static const struct bpf_func_proto *raw_tp_prog_func_proto(enum bpf_func_id func_id)
+{
+	switch (func_id) {
+	case BPF_FUNC_perf_event_output:
+		return &bpf_perf_event_output_proto_raw_tp;
+	case BPF_FUNC_get_stackid:
+		return &bpf_get_stackid_proto_raw_tp;
+	default:
+		return tracing_func_proto(func_id);
+	}
+}
+
+static bool raw_tp_prog_is_valid_access(int off, int size,
+					enum bpf_access_type type,
+					struct bpf_insn_access_aux *info)
+{
+	/* largest tracepoint in the kernel has 12 args */
+	if (off < 0 || off >= sizeof(__u64) * 12)
+		return false;
+	if (type != BPF_READ)
+		return false;
+	if (off % size != 0)
+		return false;
+	return true;
+}
+
+const struct bpf_verifier_ops raw_tracepoint_verifier_ops = {
+	.get_func_proto  = raw_tp_prog_func_proto,
+	.is_valid_access = raw_tp_prog_is_valid_access,
+};
+
+const struct bpf_prog_ops raw_tracepoint_prog_ops = {
+};
+
 static bool pe_prog_is_valid_access(int off, int size, enum bpf_access_type type,
 				    struct bpf_insn_access_aux *info)
 {
@@ -896,3 +976,111 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 
 	return ret;
 }
+
+static __always_inline
+void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
+{
+	rcu_read_lock();
+	preempt_disable();
+	(void) BPF_PROG_RUN(prog, args);
+	preempt_enable();
+	rcu_read_unlock();
+}
+
+#define UNPACK(...)			__VA_ARGS__
+#define REPEAT_1(FN, DL, X, ...)	FN(X)
+#define REPEAT_2(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_1(FN, DL, __VA_ARGS__)
+#define REPEAT_3(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_2(FN, DL, __VA_ARGS__)
+#define REPEAT_4(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_3(FN, DL, __VA_ARGS__)
+#define REPEAT_5(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_4(FN, DL, __VA_ARGS__)
+#define REPEAT_6(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_5(FN, DL, __VA_ARGS__)
+#define REPEAT_7(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_6(FN, DL, __VA_ARGS__)
+#define REPEAT_8(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_7(FN, DL, __VA_ARGS__)
+#define REPEAT_9(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_8(FN, DL, __VA_ARGS__)
+#define REPEAT_10(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_9(FN, DL, __VA_ARGS__)
+#define REPEAT_11(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_10(FN, DL, __VA_ARGS__)
+#define REPEAT_12(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_11(FN, DL, __VA_ARGS__)
+#define REPEAT(X, FN, DL, ...)		REPEAT_##X(FN, DL, __VA_ARGS__)
+
+#define SARG(X)		u64 arg##X
+#define COPY(X)		args[X] = arg##X
+
+#define __DL_COM	(,)
+#define __DL_SEM	(;)
+
+#define __SEQ_0_11	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
+
+#define BPF_TRACE_DEFN_x(x)						\
+	void bpf_trace_run##x(struct bpf_prog *prog,			\
+			      REPEAT(x, SARG, __DL_COM, __SEQ_0_11))	\
+	{								\
+		u64 args[x];						\
+		REPEAT(x, COPY, __DL_SEM, __SEQ_0_11);			\
+		__bpf_trace_run(prog, args);				\
+	}								\
+	EXPORT_SYMBOL_GPL(bpf_trace_run##x)
+BPF_TRACE_DEFN_x(1);
+BPF_TRACE_DEFN_x(2);
+BPF_TRACE_DEFN_x(3);
+BPF_TRACE_DEFN_x(4);
+BPF_TRACE_DEFN_x(5);
+BPF_TRACE_DEFN_x(6);
+BPF_TRACE_DEFN_x(7);
+BPF_TRACE_DEFN_x(8);
+BPF_TRACE_DEFN_x(9);
+BPF_TRACE_DEFN_x(10);
+BPF_TRACE_DEFN_x(11);
+BPF_TRACE_DEFN_x(12);
+
+static int __bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
+{
+	unsigned long addr;
+	char buf[128];
+
+	/*
+	 * check that program doesn't access arguments beyond what's
+	 * available in this tracepoint
+	 */
+	if (prog->aux->max_ctx_offset > tp->num_args * sizeof(u64))
+		return -EINVAL;
+
+	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
+	addr = kallsyms_lookup_name(buf);
+	if (!addr)
+		return -ENOENT;
+
+	return tracepoint_probe_register(tp, (void *)addr, prog);
+}
+
+int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
+{
+	int err;
+
+	mutex_lock(&bpf_event_mutex);
+	err = __bpf_probe_register(tp, prog);
+	mutex_unlock(&bpf_event_mutex);
+	return err;
+}
+
+static int __bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog)
+{
+	unsigned long addr;
+	char buf[128];
+
+	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
+	addr = kallsyms_lookup_name(buf);
+	if (!addr)
+		return -ENOENT;
+
+	return tracepoint_probe_unregister(tp, (void *)addr, prog);
+}
+
+int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog)
+{
+	int err;
+
+	mutex_lock(&bpf_event_mutex);
+	err = __bpf_probe_unregister(tp, prog);
+	mutex_unlock(&bpf_event_mutex);
+	return err;
+}
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v6 bpf-next 09/11] libbpf: add bpf_raw_tracepoint_open helper
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:47   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:47 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Add a bpf_raw_tracepoint_open(const char *name, int prog_fd) API to libbpf.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/include/uapi/linux/bpf.h | 11 +++++++++++
 tools/lib/bpf/bpf.c            | 11 +++++++++++
 tools/lib/bpf/bpf.h            |  1 +
 3 files changed, 23 insertions(+)

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index d245c41213ac..58060bec999d 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -94,6 +94,7 @@ enum bpf_cmd {
 	BPF_MAP_GET_FD_BY_ID,
 	BPF_OBJ_GET_INFO_BY_FD,
 	BPF_PROG_QUERY,
+	BPF_RAW_TRACEPOINT_OPEN,
 };
 
 enum bpf_map_type {
@@ -134,6 +135,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_SK_SKB,
 	BPF_PROG_TYPE_CGROUP_DEVICE,
 	BPF_PROG_TYPE_SK_MSG,
+	BPF_PROG_TYPE_RAW_TRACEPOINT,
 };
 
 enum bpf_attach_type {
@@ -344,6 +346,11 @@ union bpf_attr {
 		__aligned_u64	prog_ids;
 		__u32		prog_cnt;
 	} query;
+
+	struct {
+		__u64 name;
+		__u32 prog_fd;
+	} raw_tracepoint;
 } __attribute__((aligned(8)));
 
 /* BPF helper function descriptions:
@@ -1151,4 +1158,8 @@ struct bpf_cgroup_dev_ctx {
 	__u32 minor;
 };
 
+struct bpf_raw_tracepoint_args {
+	__u64 args[0];
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 592a58a2b681..e0500055f1a6 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -428,6 +428,17 @@ int bpf_obj_get_info_by_fd(int prog_fd, void *info, __u32 *info_len)
 	return err;
 }
 
+int bpf_raw_tracepoint_open(const char *name, int prog_fd)
+{
+	union bpf_attr attr;
+
+	bzero(&attr, sizeof(attr));
+	attr.raw_tracepoint.name = ptr_to_u64(name);
+	attr.raw_tracepoint.prog_fd = prog_fd;
+
+	return sys_bpf(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
+}
+
 int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags)
 {
 	struct sockaddr_nl sa;
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 8d18fb73d7fb..ee59342c6f42 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -79,4 +79,5 @@ int bpf_map_get_fd_by_id(__u32 id);
 int bpf_obj_get_info_by_fd(int prog_fd, void *info, __u32 *info_len);
 int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 query_flags,
 		   __u32 *attach_flags, __u32 *prog_ids, __u32 *prog_cnt);
+int bpf_raw_tracepoint_open(const char *name, int prog_fd);
 #endif
-- 
2.9.5

* [PATCH v6 bpf-next 10/11] samples/bpf: raw tracepoint test
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:47   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:47 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Add an empty raw_tracepoint bpf program to test overhead, similar
to the kprobe and traditional tracepoint tests.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 samples/bpf/Makefile                    |  1 +
 samples/bpf/bpf_load.c                  | 14 ++++++++++++++
 samples/bpf/test_overhead_raw_tp_kern.c | 17 +++++++++++++++++
 samples/bpf/test_overhead_user.c        | 12 ++++++++++++
 4 files changed, 44 insertions(+)
 create mode 100644 samples/bpf/test_overhead_raw_tp_kern.c

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 2c2a587e0942..4d6a6edd4bf6 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -119,6 +119,7 @@ always += offwaketime_kern.o
 always += spintest_kern.o
 always += map_perf_test_kern.o
 always += test_overhead_tp_kern.o
+always += test_overhead_raw_tp_kern.o
 always += test_overhead_kprobe_kern.o
 always += parse_varlen.o parse_simple.o parse_ldabs.o
 always += test_cgrp2_tc_kern.o
diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index b1a310c3ae89..bebe4188b4b3 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -61,6 +61,7 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 	bool is_kprobe = strncmp(event, "kprobe/", 7) == 0;
 	bool is_kretprobe = strncmp(event, "kretprobe/", 10) == 0;
 	bool is_tracepoint = strncmp(event, "tracepoint/", 11) == 0;
+	bool is_raw_tracepoint = strncmp(event, "raw_tracepoint/", 15) == 0;
 	bool is_xdp = strncmp(event, "xdp", 3) == 0;
 	bool is_perf_event = strncmp(event, "perf_event", 10) == 0;
 	bool is_cgroup_skb = strncmp(event, "cgroup/skb", 10) == 0;
@@ -85,6 +86,8 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 		prog_type = BPF_PROG_TYPE_KPROBE;
 	} else if (is_tracepoint) {
 		prog_type = BPF_PROG_TYPE_TRACEPOINT;
+	} else if (is_raw_tracepoint) {
+		prog_type = BPF_PROG_TYPE_RAW_TRACEPOINT;
 	} else if (is_xdp) {
 		prog_type = BPF_PROG_TYPE_XDP;
 	} else if (is_perf_event) {
@@ -131,6 +134,16 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 		return populate_prog_array(event, fd);
 	}
 
+	if (is_raw_tracepoint) {
+		efd = bpf_raw_tracepoint_open(event + 15, fd);
+		if (efd < 0) {
+			printf("tracepoint %s %s\n", event + 15, strerror(errno));
+			return -1;
+		}
+		event_fd[prog_cnt - 1] = efd;
+		return 0;
+	}
+
 	if (is_kprobe || is_kretprobe) {
 		if (is_kprobe)
 			event += 7;
@@ -587,6 +600,7 @@ static int do_load_bpf_file(const char *path, fixup_map_cb fixup_map)
 		if (memcmp(shname, "kprobe/", 7) == 0 ||
 		    memcmp(shname, "kretprobe/", 10) == 0 ||
 		    memcmp(shname, "tracepoint/", 11) == 0 ||
+		    memcmp(shname, "raw_tracepoint/", 15) == 0 ||
 		    memcmp(shname, "xdp", 3) == 0 ||
 		    memcmp(shname, "perf_event", 10) == 0 ||
 		    memcmp(shname, "socket", 6) == 0 ||
diff --git a/samples/bpf/test_overhead_raw_tp_kern.c b/samples/bpf/test_overhead_raw_tp_kern.c
new file mode 100644
index 000000000000..d2af8bc1c805
--- /dev/null
+++ b/samples/bpf/test_overhead_raw_tp_kern.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018 Facebook */
+#include <uapi/linux/bpf.h>
+#include "bpf_helpers.h"
+
+SEC("raw_tracepoint/task_rename")
+int prog(struct bpf_raw_tracepoint_args *ctx)
+{
+	return 0;
+}
+
+SEC("raw_tracepoint/urandom_read")
+int prog2(struct bpf_raw_tracepoint_args *ctx)
+{
+	return 0;
+}
+char _license[] SEC("license") = "GPL";
diff --git a/samples/bpf/test_overhead_user.c b/samples/bpf/test_overhead_user.c
index d291167fd3c7..e1d35e07a10e 100644
--- a/samples/bpf/test_overhead_user.c
+++ b/samples/bpf/test_overhead_user.c
@@ -158,5 +158,17 @@ int main(int argc, char **argv)
 		unload_progs();
 	}
 
+	if (test_flags & 0xC0) {
+		snprintf(filename, sizeof(filename),
+			 "%s_raw_tp_kern.o", argv[0]);
+		if (load_bpf_file(filename)) {
+			printf("%s", bpf_log_buf);
+			return 1;
+		}
+		printf("w/RAW_TRACEPOINT\n");
+		run_perf_test(num_cpu, test_flags >> 6);
+		unload_progs();
+	}
+
 	return 0;
 }
-- 
2.9.5

* [PATCH v6 bpf-next 11/11] selftests/bpf: test for bpf_get_stackid() from raw tracepoints
  2018-03-27  2:46 ` Alexei Starovoitov
@ 2018-03-27  2:47   ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27  2:47 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, peterz, rostedt, netdev, kernel-team, linux-api

From: Alexei Starovoitov <ast@kernel.org>

Similar to the traditional tracepoint test, add a bpf_get_stackid() test
attached via raw tracepoints,
and reduce the verbosity of the existing stackmap test.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/testing/selftests/bpf/test_progs.c | 91 ++++++++++++++++++++++++--------
 1 file changed, 70 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index e9df48b306df..faadbe233966 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -877,7 +877,7 @@ static void test_stacktrace_map()
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_TRACEPOINT, &obj, &prog_fd);
 	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
-		goto out;
+		return;
 
 	/* Get the ID for the sched/sched_switch tracepoint */
 	snprintf(buf, sizeof(buf),
@@ -888,8 +888,7 @@ static void test_stacktrace_map()
 
 	bytes = read(efd, buf, sizeof(buf));
 	close(efd);
-	if (CHECK(bytes <= 0 || bytes >= sizeof(buf),
-		  "read", "bytes %d errno %d\n", bytes, errno))
+	if (bytes <= 0 || bytes >= sizeof(buf))
 		goto close_prog;
 
 	/* Open the perf event and attach bpf program */
@@ -906,29 +905,24 @@ static void test_stacktrace_map()
 		goto close_prog;
 
 	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
-	if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n",
-		  err, errno))
-		goto close_pmu;
+	if (err)
+		goto disable_pmu;
 
 	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
-	if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n",
-		  err, errno))
+	if (err)
 		goto disable_pmu;
 
 	/* find map fds */
 	control_map_fd = bpf_find_map(__func__, obj, "control_map");
-	if (CHECK(control_map_fd < 0, "bpf_find_map control_map",
-		  "err %d errno %d\n", err, errno))
+	if (control_map_fd < 0)
 		goto disable_pmu;
 
 	stackid_hmap_fd = bpf_find_map(__func__, obj, "stackid_hmap");
-	if (CHECK(stackid_hmap_fd < 0, "bpf_find_map stackid_hmap",
-		  "err %d errno %d\n", err, errno))
+	if (stackid_hmap_fd < 0)
 		goto disable_pmu;
 
 	stackmap_fd = bpf_find_map(__func__, obj, "stackmap");
-	if (CHECK(stackmap_fd < 0, "bpf_find_map stackmap", "err %d errno %d\n",
-		  err, errno))
+	if (stackmap_fd < 0)
 		goto disable_pmu;
 
 	/* give some time for bpf program run */
@@ -945,24 +939,78 @@ static void test_stacktrace_map()
 	err = compare_map_keys(stackid_hmap_fd, stackmap_fd);
 	if (CHECK(err, "compare_map_keys stackid_hmap vs. stackmap",
 		  "err %d errno %d\n", err, errno))
-		goto disable_pmu;
+		goto disable_pmu_noerr;
 
 	err = compare_map_keys(stackmap_fd, stackid_hmap_fd);
 	if (CHECK(err, "compare_map_keys stackmap vs. stackid_hmap",
 		  "err %d errno %d\n", err, errno))
-		; /* fall through */
+		goto disable_pmu_noerr;
 
+	goto disable_pmu_noerr;
 disable_pmu:
+	error_cnt++;
+disable_pmu_noerr:
 	ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-
-close_pmu:
 	close(pmu_fd);
-
 close_prog:
 	bpf_object__close(obj);
+}
 
-out:
-	return;
+static void test_stacktrace_map_raw_tp()
+{
+	int control_map_fd, stackid_hmap_fd, stackmap_fd;
+	const char *file = "./test_stacktrace_map.o";
+	int efd, err, prog_fd;
+	__u32 key, val, duration = 0;
+	struct bpf_object *obj;
+
+	err = bpf_prog_load(file, BPF_PROG_TYPE_RAW_TRACEPOINT, &obj, &prog_fd);
+	if (CHECK(err, "prog_load raw tp", "err %d errno %d\n", err, errno))
+		return;
+
+	efd = bpf_raw_tracepoint_open("sched_switch", prog_fd);
+	if (CHECK(efd < 0, "raw_tp_open", "err %d errno %d\n", efd, errno))
+		goto close_prog;
+
+	/* find map fds */
+	control_map_fd = bpf_find_map(__func__, obj, "control_map");
+	if (control_map_fd < 0)
+		goto close_prog;
+
+	stackid_hmap_fd = bpf_find_map(__func__, obj, "stackid_hmap");
+	if (stackid_hmap_fd < 0)
+		goto close_prog;
+
+	stackmap_fd = bpf_find_map(__func__, obj, "stackmap");
+	if (stackmap_fd < 0)
+		goto close_prog;
+
+	/* give some time for bpf program run */
+	sleep(1);
+
+	/* disable stack trace collection */
+	key = 0;
+	val = 1;
+	bpf_map_update_elem(control_map_fd, &key, &val, 0);
+
+	/* for every element in stackid_hmap, we can find a corresponding one
+ * in stackmap, and vice versa.
+	 */
+	err = compare_map_keys(stackid_hmap_fd, stackmap_fd);
+	if (CHECK(err, "compare_map_keys stackid_hmap vs. stackmap",
+		  "err %d errno %d\n", err, errno))
+		goto close_prog;
+
+	err = compare_map_keys(stackmap_fd, stackid_hmap_fd);
+	if (CHECK(err, "compare_map_keys stackmap vs. stackid_hmap",
+		  "err %d errno %d\n", err, errno))
+		goto close_prog;
+
+	goto close_prog_noerr;
+close_prog:
+	error_cnt++;
+close_prog_noerr:
+	bpf_object__close(obj);
 }
 
 static int extract_build_id(char *build_id, size_t size)
@@ -1138,6 +1186,7 @@ int main(void)
 	test_tp_attach_query();
 	test_stacktrace_map();
 	test_stacktrace_build_id();
+	test_stacktrace_map_raw_tp();
 
 	printf("Summary: %d PASSED, %d FAILED\n", pass_cnt, error_cnt);
 	return error_cnt ? EXIT_FAILURE : EXIT_SUCCESS;
-- 
2.9.5

* Re: [PATCH v6 bpf-next 07/11] tracepoint: introduce kernel_tracepoint_find_by_name
  2018-03-27  2:47   ` Alexei Starovoitov
@ 2018-03-27 14:07     ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 14:07 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers

On Mon, 26 Mar 2018 19:47:02 -0700
Alexei Starovoitov <ast@fb.com> wrote:

> From: Alexei Starovoitov <ast@kernel.org>
> 
> introduce kernel_tracepoint_find_by_name() helper to let bpf core
> find tracepoint by name and later attach bpf probe to a tracepoint
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Thanks for doing this Alexei!

One nit below.


> ---
>  include/linux/tracepoint.h | 6 ++++++
>  kernel/tracepoint.c        | 9 +++++++++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
> index c92f4adbc0d7..a00b84473211 100644
> --- a/include/linux/tracepoint.h
> +++ b/include/linux/tracepoint.h
> @@ -43,6 +43,12 @@ tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data);
>  extern void
>  for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
>  		void *priv);
> +#ifdef CONFIG_TRACEPOINTS
> +struct tracepoint *kernel_tracepoint_find_by_name(const char *name);
> +#else
> +static inline struct tracepoint *
> +kernel_tracepoint_find_by_name(const char *name) { return NULL; }
> +#endif
>  
>  #ifdef CONFIG_MODULES
>  struct tp_module {
> diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
> index 671b13457387..e2a9a0391ae2 100644
> --- a/kernel/tracepoint.c
> +++ b/kernel/tracepoint.c
> @@ -528,6 +528,15 @@ void for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
>  }
>  EXPORT_SYMBOL_GPL(for_each_kernel_tracepoint);
>  
> +struct tracepoint *kernel_tracepoint_find_by_name(const char *name)
> +{
> +	struct tracepoint * const *tp = __start___tracepoints_ptrs;
> +
> +	for (; tp < __stop___tracepoints_ptrs; tp++)
> +		if (!strcmp((*tp)->name, name))
> +			return *tp;


Usually for cases like this, we prefer to add brackets for the for
block, as it's not a single line below it.

	for (; tp < __stop__tracepoints_ptrs; tp++) {
		if (!strcmp((*tp)->name, name))
			return *tp;
	}

-- Steve


	

> +	return NULL;
> +}
>  #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
>  
>  /* NB: reg/unreg are called while guarded with the tracepoints_mutex */

> +		if (!strcmp((*tp)->name, name))
> +			return *tp;


Usually for cases like this, we prefer to add brackets for the for
block, as it's not a single line below it.

	for (; tp < __stop___tracepoints_ptrs; tp++) {
		if (!strcmp((*tp)->name, name))
			return *tp;
	}

-- Steve


	

> +	return NULL;
> +}
>  #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
>  
>  /* NB: reg/unreg are called while guarded with the tracepoints_mutex */

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 07/11] tracepoint: introduce kernel_tracepoint_find_by_name
  2018-03-27 14:07     ` Steven Rostedt
@ 2018-03-27 14:18     ` Mathieu Desnoyers
  2018-03-27 14:42       ` Steven Rostedt
  -1 siblings, 1 reply; 57+ messages in thread
From: Mathieu Desnoyers @ 2018-03-27 14:18 UTC (permalink / raw)
  To: rostedt
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann,
	Linus Torvalds, Peter Zijlstra, netdev, kernel-team, linux-api

----- On Mar 27, 2018, at 10:07 AM, rostedt rostedt@goodmis.org wrote:

> On Mon, 26 Mar 2018 19:47:02 -0700
> Alexei Starovoitov <ast@fb.com> wrote:
> 
>> From: Alexei Starovoitov <ast@kernel.org>
>> 
>> introduce kernel_tracepoint_find_by_name() helper to let bpf core
>> find tracepoint by name and later attach bpf probe to a tracepoint
>> 
>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> 
> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Steven showed preference for tracepoint_kernel_find_by_name() at some
point (starting with a tracepoint_ prefix). I'm fine with either of
the names.

Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

Thanks,

Mathieu

> 
> Thanks for doing this Alexei!
> 
> One nit below.
> 
> 
>> ---
>>  include/linux/tracepoint.h | 6 ++++++
>>  kernel/tracepoint.c        | 9 +++++++++
>>  2 files changed, 15 insertions(+)
>> 
>> diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
>> index c92f4adbc0d7..a00b84473211 100644
>> --- a/include/linux/tracepoint.h
>> +++ b/include/linux/tracepoint.h
>> @@ -43,6 +43,12 @@ tracepoint_probe_unregister(struct tracepoint *tp, void
>> *probe, void *data);
>>  extern void
>>  for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
>>  		void *priv);
>> +#ifdef CONFIG_TRACEPOINTS
>> +struct tracepoint *kernel_tracepoint_find_by_name(const char *name);
>> +#else
>> +static inline struct tracepoint *
>> +kernel_tracepoint_find_by_name(const char *name) { return NULL; }
>> +#endif
>>  
>>  #ifdef CONFIG_MODULES
>>  struct tp_module {
>> diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
>> index 671b13457387..e2a9a0391ae2 100644
>> --- a/kernel/tracepoint.c
>> +++ b/kernel/tracepoint.c
>> @@ -528,6 +528,15 @@ void for_each_kernel_tracepoint(void (*fct)(struct
>> tracepoint *tp, void *priv),
>>  }
>>  EXPORT_SYMBOL_GPL(for_each_kernel_tracepoint);
>>  
>> +struct tracepoint *kernel_tracepoint_find_by_name(const char *name)
>> +{
>> +	struct tracepoint * const *tp = __start___tracepoints_ptrs;
>> +
>> +	for (; tp < __stop___tracepoints_ptrs; tp++)
>> +		if (!strcmp((*tp)->name, name))
>> +			return *tp;
> 
> 
> Usually for cases like this, we prefer to add brackets for the for
> block, as it's not a single line below it.
> 
>	for (; tp < __stop___tracepoints_ptrs; tp++) {
>		if (!strcmp((*tp)->name, name))
>			return *tp;
>	}
> 
> -- Steve
> 
> 
>	
> 
>> +	return NULL;
>> +}
>>  #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
>>  
>>  /* NB: reg/unreg are called while guarded with the tracepoints_mutex */

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


* Re: [PATCH v6 bpf-next 07/11] tracepoint: introduce kernel_tracepoint_find_by_name
  2018-03-27 14:18     ` Mathieu Desnoyers
@ 2018-03-27 14:42       ` Steven Rostedt
  2018-03-27 15:53         ` Alexei Starovoitov
  0 siblings, 1 reply; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 14:42 UTC (permalink / raw)
  To: Mathieu Desnoyers
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann,
	Linus Torvalds, Peter Zijlstra, netdev, kernel-team, linux-api

On Tue, 27 Mar 2018 10:18:24 -0400 (EDT)
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:

> ----- On Mar 27, 2018, at 10:07 AM, rostedt rostedt@goodmis.org wrote:
> 
> > On Mon, 26 Mar 2018 19:47:02 -0700
> > Alexei Starovoitov <ast@fb.com> wrote:
> >   
> >> From: Alexei Starovoitov <ast@kernel.org>
> >> 
> >> introduce kernel_tracepoint_find_by_name() helper to let bpf core
> >> find tracepoint by name and later attach bpf probe to a tracepoint
> >> 
> >> Signed-off-by: Alexei Starovoitov <ast@kernel.org>  
> > 
> > Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>  
> 
> Steven showed preference for tracepoint_kernel_find_by_name() at some
> point (starting with a tracepoint_ prefix). I'm find with either of
> the names.

Yeah, I do prefer tracepoint_kernel_find_by_name() to stay consistent
with the other tracepoint functions. But we have
"for_each_kernel_tracepoint()" and not "for_each_tracepoint_kernel()",
thus we need to pick being consistent with one or the other. One answer
is to use tracepoint_kernel_find_by_name() and rename the for_each to
for_each_tracepoint_kernel().

-- Steve


> 
> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> 
> Thanks,
> 
> Mathieu
> 
> > 
> > Thanks for doing this Alexei!
> > 
> > One nit below.


* Re: [PATCH v6 bpf-next 06/11] tracepoint: compute num_args at build time
  2018-03-27  2:47   ` Alexei Starovoitov
@ 2018-03-27 15:15     ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 15:15 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers

On Mon, 26 Mar 2018 19:47:01 -0700
Alexei Starovoitov <ast@fb.com> wrote:

> From: Alexei Starovoitov <ast@kernel.org>
> 
> compute number of arguments passed into tracepoint
> at compile time and store it as part of 'struct tracepoint'.
> The number is necessary to check safety of bpf program access that
> is coming in subsequent patch.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve

> ---
>  include/linux/tracepoint-defs.h |  1 +
>  include/linux/tracepoint.h      | 12 ++++++------
>  include/trace/define_trace.h    | 14 +++++++-------
>  3 files changed, 14 insertions(+), 13 deletions(-)
> 
> diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h
> index 64ed7064f1fa..39a283c61c51 100644
> --- a/include/linux/tracepoint-defs.h
> +++ b/include/linux/tracepoint-defs.h
> @@ -33,6 +33,7 @@ struct tracepoint {
>  	int (*regfunc)(void);
>  	void (*unregfunc)(void);
>  	struct tracepoint_func __rcu *funcs;
> +	u32 num_args;
>  };
>  
>  #endif
> diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
> index c94f466d57ef..c92f4adbc0d7 100644
> --- a/include/linux/tracepoint.h
> +++ b/include/linux/tracepoint.h
> @@ -230,18 +230,18 @@ extern void syscall_unregfunc(void);
>   * structures, so we create an array of pointers that will be used for iteration
>   * on the tracepoints.
>   */
> -#define DEFINE_TRACE_FN(name, reg, unreg)				 \
> +#define DEFINE_TRACE_FN(name, reg, unreg, num_args)			 \
>  	static const char __tpstrtab_##name[]				 \
>  	__attribute__((section("__tracepoints_strings"))) = #name;	 \
>  	struct tracepoint __tracepoint_##name				 \
>  	__attribute__((section("__tracepoints"))) =			 \
> -		{ __tpstrtab_##name, STATIC_KEY_INIT_FALSE, reg, unreg, NULL };\
> +		{ __tpstrtab_##name, STATIC_KEY_INIT_FALSE, reg, unreg, NULL, num_args };\
>  	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
>  	__attribute__((section("__tracepoints_ptrs"))) =		 \
>  		&__tracepoint_##name;
>  
> -#define DEFINE_TRACE(name)						\
> -	DEFINE_TRACE_FN(name, NULL, NULL);
> +#define DEFINE_TRACE(name, num_args)					\
> +	DEFINE_TRACE_FN(name, NULL, NULL, num_args);
>  
>  #define EXPORT_TRACEPOINT_SYMBOL_GPL(name)				\
>  	EXPORT_SYMBOL_GPL(__tracepoint_##name)
> @@ -275,8 +275,8 @@ extern void syscall_unregfunc(void);
>  		return false;						\
>  	}
>  
> -#define DEFINE_TRACE_FN(name, reg, unreg)
> -#define DEFINE_TRACE(name)
> +#define DEFINE_TRACE_FN(name, reg, unreg, num_args)
> +#define DEFINE_TRACE(name, num_args)
>  #define EXPORT_TRACEPOINT_SYMBOL_GPL(name)
>  #define EXPORT_TRACEPOINT_SYMBOL(name)
>  
> diff --git a/include/trace/define_trace.h b/include/trace/define_trace.h
> index d9e3d4aa3f6e..96b22ace9ae7 100644
> --- a/include/trace/define_trace.h
> +++ b/include/trace/define_trace.h
> @@ -25,7 +25,7 @@
>  
>  #undef TRACE_EVENT
>  #define TRACE_EVENT(name, proto, args, tstruct, assign, print)	\
> -	DEFINE_TRACE(name)
> +	DEFINE_TRACE(name, COUNT_ARGS(args))
>  
>  #undef TRACE_EVENT_CONDITION
>  #define TRACE_EVENT_CONDITION(name, proto, args, cond, tstruct, assign, print) \
> @@ -39,24 +39,24 @@
>  #undef TRACE_EVENT_FN
>  #define TRACE_EVENT_FN(name, proto, args, tstruct,		\
>  		assign, print, reg, unreg)			\
> -	DEFINE_TRACE_FN(name, reg, unreg)
> +	DEFINE_TRACE_FN(name, reg, unreg, COUNT_ARGS(args))
>  
>  #undef TRACE_EVENT_FN_COND
>  #define TRACE_EVENT_FN_COND(name, proto, args, cond, tstruct,		\
>  		assign, print, reg, unreg)			\
> -	DEFINE_TRACE_FN(name, reg, unreg)
> +	DEFINE_TRACE_FN(name, reg, unreg, COUNT_ARGS(args))
>  
>  #undef DEFINE_EVENT
>  #define DEFINE_EVENT(template, name, proto, args) \
> -	DEFINE_TRACE(name)
> +	DEFINE_TRACE(name, COUNT_ARGS(args))
>  
>  #undef DEFINE_EVENT_FN
>  #define DEFINE_EVENT_FN(template, name, proto, args, reg, unreg) \
> -	DEFINE_TRACE_FN(name, reg, unreg)
> +	DEFINE_TRACE_FN(name, reg, unreg, COUNT_ARGS(args))
>  
>  #undef DEFINE_EVENT_PRINT
>  #define DEFINE_EVENT_PRINT(template, name, proto, args, print)	\
> -	DEFINE_TRACE(name)
> +	DEFINE_TRACE(name, COUNT_ARGS(args))
>  
>  #undef DEFINE_EVENT_CONDITION
>  #define DEFINE_EVENT_CONDITION(template, name, proto, args, cond) \
> @@ -64,7 +64,7 @@
>  
>  #undef DECLARE_TRACE
>  #define DECLARE_TRACE(name, proto, args)	\
> -	DEFINE_TRACE(name)
> +	DEFINE_TRACE(name, COUNT_ARGS(args))
>  
>  #undef TRACE_INCLUDE
>  #undef __TRACE_INCLUDE


* Re: [PATCH v6 bpf-next 07/11] tracepoint: introduce kernel_tracepoint_find_by_name
  2018-03-27 14:42       ` Steven Rostedt
@ 2018-03-27 15:53         ` Alexei Starovoitov
  2018-03-27 16:09           ` Mathieu Desnoyers
  2018-03-27 16:36           ` Daniel Borkmann
  0 siblings, 2 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27 15:53 UTC (permalink / raw)
  To: Steven Rostedt, Mathieu Desnoyers
  Cc: David S. Miller, Daniel Borkmann, Linus Torvalds, Peter Zijlstra,
	netdev, kernel-team, linux-api

On 3/27/18 7:42 AM, Steven Rostedt wrote:
> On Tue, 27 Mar 2018 10:18:24 -0400 (EDT)
> Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
>
>> ----- On Mar 27, 2018, at 10:07 AM, rostedt rostedt@goodmis.org wrote:
>>
>>> On Mon, 26 Mar 2018 19:47:02 -0700
>>> Alexei Starovoitov <ast@fb.com> wrote:
>>>
>>>> From: Alexei Starovoitov <ast@kernel.org>
>>>>
>>>> introduce kernel_tracepoint_find_by_name() helper to let bpf core
>>>> find tracepoint by name and later attach bpf probe to a tracepoint
>>>>
>>>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>>>
>>> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
>>
>> Steven showed preference for tracepoint_kernel_find_by_name() at some
> point (starting with a tracepoint_ prefix). I'm fine with either of
>> the names.
>
> Yeah, I do prefer tracepoint_kernel_find_by_name() to stay consistent
> with the other tracepoint functions. But we have
> "for_each_kernel_tracepoint()" and not "for_each_tracepoint_kernel()",
> thus we need to pick being consistent with one or the other. One answer
> is to use tracepoint_kernel_find_by_name() and rename the for_each to
> for_each_tracepoint_kernel().

yep. that's exactly the reason I picked kernel_tracepoint_find_by_name()
to match for_each_kernel_tracepoint() naming.

I can certainly send a follow up patch to rename both to
*tracepoint_kernel* and then you can nack it because it breaks lttng :)
but let's do it in a separate thread.

Daniel,
do you mind adding { } as Steven requested while applying or
you want me to resubmit the whole thing?

Thanks!


* Re: [PATCH v6 bpf-next 07/11] tracepoint: introduce kernel_tracepoint_find_by_name
  2018-03-27 15:53         ` Alexei Starovoitov
@ 2018-03-27 16:09           ` Mathieu Desnoyers
  2018-03-27 16:36           ` Daniel Borkmann
  1 sibling, 0 replies; 57+ messages in thread
From: Mathieu Desnoyers @ 2018-03-27 16:09 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: rostedt, David S. Miller, Daniel Borkmann, Linus Torvalds,
	Peter Zijlstra, netdev, kernel-team, linux-api

----- On Mar 27, 2018, at 11:53 AM, Alexei Starovoitov ast@fb.com wrote:

> On 3/27/18 7:42 AM, Steven Rostedt wrote:
>> On Tue, 27 Mar 2018 10:18:24 -0400 (EDT)
>> Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
>>
>>> ----- On Mar 27, 2018, at 10:07 AM, rostedt rostedt@goodmis.org wrote:
>>>
>>>> On Mon, 26 Mar 2018 19:47:02 -0700
>>>> Alexei Starovoitov <ast@fb.com> wrote:
>>>>
>>>>> From: Alexei Starovoitov <ast@kernel.org>
>>>>>
>>>>> introduce kernel_tracepoint_find_by_name() helper to let bpf core
>>>>> find tracepoint by name and later attach bpf probe to a tracepoint
>>>>>
>>>>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>>>>
>>>> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
>>>
>>> Steven showed preference for tracepoint_kernel_find_by_name() at some
>>> point (starting with a tracepoint_ prefix). I'm fine with either of
>>> the names.
>>
>> Yeah, I do prefer tracepoint_kernel_find_by_name() to stay consistent
>> with the other tracepoint functions. But we have
>> "for_each_kernel_tracepoint()" and not "for_each_tracepoint_kernel()",
>> thus we need to pick being consistent with one or the other. One answer
>> is to use tracepoint_kernel_find_by_name() and rename the for_each to
>> for_each_tracepoint_kernel().
> 
> yep. that's exactly the reason I picked kernel_tracepoint_find_by_name()
> to match for_each_kernel_tracepoint() naming.
> 
> I can certainly send a follow up patch to rename both to
> *tracepoint_kernel* and then you can nack it because it breaks lttng :)

If Steven prefers changing the name of for_each_kernel_tracepoint() to
for_each_tracepoint_kernel(), I'll adapt LTTng accordingly. I don't
mind either way, as long as the change is justified.

Thanks,

Mathieu


> but let's do it in a separate thread.
> 
> Daniel,
> do you mind adding { } as Steven requested while applying or
> you want me to resubmit the whole thing?
> 
> Thanks!

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


* Re: [PATCH v6 bpf-next 07/11] tracepoint: introduce kernel_tracepoint_find_by_name
  2018-03-27 15:53         ` Alexei Starovoitov
  2018-03-27 16:09           ` Mathieu Desnoyers
@ 2018-03-27 16:36           ` Daniel Borkmann
  1 sibling, 0 replies; 57+ messages in thread
From: Daniel Borkmann @ 2018-03-27 16:36 UTC (permalink / raw)
  To: Alexei Starovoitov, Steven Rostedt, Mathieu Desnoyers
  Cc: David S. Miller, Linus Torvalds, Peter Zijlstra, netdev,
	kernel-team, linux-api

On 03/27/2018 05:53 PM, Alexei Starovoitov wrote:
> On 3/27/18 7:42 AM, Steven Rostedt wrote:
>> On Tue, 27 Mar 2018 10:18:24 -0400 (EDT)
>> Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
>>> ----- On Mar 27, 2018, at 10:07 AM, rostedt rostedt@goodmis.org wrote:
>>>> On Mon, 26 Mar 2018 19:47:02 -0700
>>>> Alexei Starovoitov <ast@fb.com> wrote:
>>>>> From: Alexei Starovoitov <ast@kernel.org>
>>>>>
>>>>> introduce kernel_tracepoint_find_by_name() helper to let bpf core
>>>>> find tracepoint by name and later attach bpf probe to a tracepoint
>>>>>
>>>>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>>>>
>>>> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
>>>
>>> Steven showed preference for tracepoint_kernel_find_by_name() at some
>>> point (starting with a tracepoint_ prefix). I'm fine with either of
>>> the names.
>>
>> Yeah, I do prefer tracepoint_kernel_find_by_name() to stay consistent
>> with the other tracepoint functions. But we have
>> "for_each_kernel_tracepoint()" and not "for_each_tracepoint_kernel()",
>> thus we need to pick being consistent with one or the other. One answer
>> is to use tracepoint_kernel_find_by_name() and rename the for_each to
>> for_each_tracepoint_kernel().
> 
> yep. that's exactly the reason I picked kernel_tracepoint_find_by_name()
> to match for_each_kernel_tracepoint() naming.
> 
> I can certainly send a follow up patch to rename both to
> *tracepoint_kernel* and then you can nack it because it breaks lttng :)
> but let's do it in a separate thread.
> 
> Daniel,
> do you mind adding { } as Steven requested while applying or
> you want me to resubmit the whole thing?

Yeah, I can fix it up while applying.


* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27  2:47   ` Alexei Starovoitov
@ 2018-03-27 17:02     ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 17:02 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook

On Mon, 26 Mar 2018 19:47:03 -0700
Alexei Starovoitov <ast@fb.com> wrote:


> Ctrl-C of tracing daemon or cmdline tool that uses this feature
> will automatically detach bpf program, unload it and
> unregister tracepoint probe.
> 
> On the kernel side for_each_kernel_tracepoint() is used

You need to update the change log to state
kernel_tracepoint_find_by_name().

But looking at the code, I really think you should do it properly and
not rely on a hack that finds your function via kallsyms lookup and
then executing the address it returns.

> to find a tracepoint with "xdp_exception" name
> (that would be __tracepoint_xdp_exception record)
> 
> Then kallsyms_lookup_name() is used to find the addr
> of __bpf_trace_xdp_exception() probe function.
> 
> And finally tracepoint_probe_register() is used to connect probe
> with tracepoint.
> 
> Addition of bpf_raw_tracepoint doesn't interfere with ftrace and perf
> tracepoint mechanisms. perf_event_open() can be used in parallel
> on the same tracepoint.
> Multiple bpf_raw_tracepoint_open("xdp_exception", prog_fd) are permitted.
> Each with its own bpf program. The kernel will execute
> all tracepoint probes and all attached bpf programs.
> 
> In the future bpf_raw_tracepoints can be extended with
> query/introspection logic.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  include/linux/bpf_types.h    |   1 +
>  include/linux/trace_events.h |  37 +++++++++
>  include/trace/bpf_probe.h    |  87 ++++++++++++++++++++
>  include/trace/define_trace.h |   1 +
>  include/uapi/linux/bpf.h     |  11 +++
>  kernel/bpf/syscall.c         |  78 ++++++++++++++++++
>  kernel/trace/bpf_trace.c     | 188 +++++++++++++++++++++++++++++++++++++++++++
>  7 files changed, 403 insertions(+)
>  create mode 100644 include/trace/bpf_probe.h
> 
>


> --- a/include/linux/trace_events.h
> +++ b/include/linux/trace_events.h
> @@ -468,6 +468,8 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx);
>  int perf_event_attach_bpf_prog(struct perf_event *event, struct bpf_prog *prog);
>  void perf_event_detach_bpf_prog(struct perf_event *event);
>  int perf_event_query_prog_array(struct perf_event *event, void __user *info);
> +int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog);
> +int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog);
>  #else
>  static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
>  {
> @@ -487,6 +489,14 @@ perf_event_query_prog_array(struct perf_event *event, void __user *info)
>  {
>  	return -EOPNOTSUPP;
>  }
> +static inline int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *p)
> +{
> +	return -EOPNOTSUPP;
> +}
> +static inline int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *p)
> +{
> +	return -EOPNOTSUPP;
> +}
>  #endif
>  
>  enum {
> @@ -546,6 +556,33 @@ extern void ftrace_profile_free_filter(struct perf_event *event);
>  void perf_trace_buf_update(void *record, u16 type);
>  void *perf_trace_buf_alloc(int size, struct pt_regs **regs, int *rctxp);
>  
> +void bpf_trace_run1(struct bpf_prog *prog, u64 arg1);
> +void bpf_trace_run2(struct bpf_prog *prog, u64 arg1, u64 arg2);
> +void bpf_trace_run3(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3);
> +void bpf_trace_run4(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4);
> +void bpf_trace_run5(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5);
> +void bpf_trace_run6(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5, u64 arg6);
> +void bpf_trace_run7(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7);
> +void bpf_trace_run8(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		    u64 arg8);
> +void bpf_trace_run9(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		    u64 arg8, u64 arg9);
> +void bpf_trace_run10(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		     u64 arg8, u64 arg9, u64 arg10);
> +void bpf_trace_run11(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		     u64 arg8, u64 arg9, u64 arg10, u64 arg11);
> +void bpf_trace_run12(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		     u64 arg8, u64 arg9, u64 arg10, u64 arg11, u64 arg12);
>  void perf_trace_run_bpf_submit(void *raw_data, int size, int rctx,
>  			       struct trace_event_call *call, u64 count,
>  			       struct pt_regs *regs, struct hlist_head *head,
> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> new file mode 100644
> index 000000000000..d2cc0663e618
> --- /dev/null
> +++ b/include/trace/bpf_probe.h
> @@ -0,0 +1,87 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#undef TRACE_SYSTEM_VAR
> +
> +#ifdef CONFIG_BPF_EVENTS
> +
> +#undef __entry
> +#define __entry entry
> +
> +#undef __get_dynamic_array
> +#define __get_dynamic_array(field)	\
> +		((void *)__entry + (__entry->__data_loc_##field & 0xffff))
> +
> +#undef __get_dynamic_array_len
> +#define __get_dynamic_array_len(field)	\
> +		((__entry->__data_loc_##field >> 16) & 0xffff)
> +
> +#undef __get_str
> +#define __get_str(field) ((char *)__get_dynamic_array(field))
> +
> +#undef __get_bitmask
* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
@ 2018-03-27 17:02     ` Steven Rostedt
  0 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 17:02 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook

On Mon, 26 Mar 2018 19:47:03 -0700
Alexei Starovoitov <ast@fb.com> wrote:


> Ctrl-C of tracing daemon or cmdline tool that uses this feature
> will automatically detach bpf program, unload it and
> unregister tracepoint probe.
> 
> On the kernel side for_each_kernel_tracepoint() is used

You need to update the change log to reference
kernel_tracepoint_find_by_name().

But looking at the code, I really think you should do it properly and
not rely on a hack that finds your function via kallsyms lookup and
then executing the address it returns.

> to find a tracepoint with "xdp_exception" name
> (that would be __tracepoint_xdp_exception record)
> 
> Then kallsyms_lookup_name() is used to find the addr
> of __bpf_trace_xdp_exception() probe function.
> 
> And finally tracepoint_probe_register() is used to connect probe
> with tracepoint.
> 
> Addition of bpf_raw_tracepoint doesn't interfere with ftrace and perf
> tracepoint mechanisms. perf_event_open() can be used in parallel
> on the same tracepoint.
> Multiple bpf_raw_tracepoint_open("xdp_exception", prog_fd) are permitted.
> Each with its own bpf program. The kernel will execute
> all tracepoint probes and all attached bpf programs.
> 
> In the future bpf_raw_tracepoints can be extended with
> query/introspection logic.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  include/linux/bpf_types.h    |   1 +
>  include/linux/trace_events.h |  37 +++++++++
>  include/trace/bpf_probe.h    |  87 ++++++++++++++++++++
>  include/trace/define_trace.h |   1 +
>  include/uapi/linux/bpf.h     |  11 +++
>  kernel/bpf/syscall.c         |  78 ++++++++++++++++++
>  kernel/trace/bpf_trace.c     | 188 +++++++++++++++++++++++++++++++++++++++++++
>  7 files changed, 403 insertions(+)
>  create mode 100644 include/trace/bpf_probe.h
> 
>


> --- a/include/linux/trace_events.h
> +++ b/include/linux/trace_events.h
> @@ -468,6 +468,8 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx);
>  int perf_event_attach_bpf_prog(struct perf_event *event, struct bpf_prog *prog);
>  void perf_event_detach_bpf_prog(struct perf_event *event);
>  int perf_event_query_prog_array(struct perf_event *event, void __user *info);
> +int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog);
> +int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog);
>  #else
>  static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
>  {
> @@ -487,6 +489,14 @@ perf_event_query_prog_array(struct perf_event *event, void __user *info)
>  {
>  	return -EOPNOTSUPP;
>  }
> +static inline int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *p)
> +{
> +	return -EOPNOTSUPP;
> +}
> +static inline int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *p)
> +{
> +	return -EOPNOTSUPP;
> +}
>  #endif
>  
>  enum {
> @@ -546,6 +556,33 @@ extern void ftrace_profile_free_filter(struct perf_event *event);
>  void perf_trace_buf_update(void *record, u16 type);
>  void *perf_trace_buf_alloc(int size, struct pt_regs **regs, int *rctxp);
>  
> +void bpf_trace_run1(struct bpf_prog *prog, u64 arg1);
> +void bpf_trace_run2(struct bpf_prog *prog, u64 arg1, u64 arg2);
> +void bpf_trace_run3(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3);
> +void bpf_trace_run4(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4);
> +void bpf_trace_run5(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5);
> +void bpf_trace_run6(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5, u64 arg6);
> +void bpf_trace_run7(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7);
> +void bpf_trace_run8(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		    u64 arg8);
> +void bpf_trace_run9(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		    u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		    u64 arg8, u64 arg9);
> +void bpf_trace_run10(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		     u64 arg8, u64 arg9, u64 arg10);
> +void bpf_trace_run11(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		     u64 arg8, u64 arg9, u64 arg10, u64 arg11);
> +void bpf_trace_run12(struct bpf_prog *prog, u64 arg1, u64 arg2,
> +		     u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7,
> +		     u64 arg8, u64 arg9, u64 arg10, u64 arg11, u64 arg12);
>  void perf_trace_run_bpf_submit(void *raw_data, int size, int rctx,
>  			       struct trace_event_call *call, u64 count,
>  			       struct pt_regs *regs, struct hlist_head *head,
> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> new file mode 100644
> index 000000000000..d2cc0663e618
> --- /dev/null
> +++ b/include/trace/bpf_probe.h
> @@ -0,0 +1,87 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#undef TRACE_SYSTEM_VAR
> +
> +#ifdef CONFIG_BPF_EVENTS
> +
> +#undef __entry
> +#define __entry entry
> +
> +#undef __get_dynamic_array
> +#define __get_dynamic_array(field)	\
> +		((void *)__entry + (__entry->__data_loc_##field & 0xffff))
> +
> +#undef __get_dynamic_array_len
> +#define __get_dynamic_array_len(field)	\
> +		((__entry->__data_loc_##field >> 16) & 0xffff)
> +
> +#undef __get_str
> +#define __get_str(field) ((char *)__get_dynamic_array(field))
> +
> +#undef __get_bitmask
> +#define __get_bitmask(field) (char *)__get_dynamic_array(field)
> +
> +#undef __perf_count
> +#define __perf_count(c)	(c)
> +
> +#undef __perf_task
> +#define __perf_task(t)	(t)
> +
> +/* cast any integer, pointer, or small struct to u64 */
> +#define UINTTYPE(size) \
> +	__typeof__(__builtin_choose_expr(size == 1,  (u8)1, \
> +		   __builtin_choose_expr(size == 2, (u16)2, \
> +		   __builtin_choose_expr(size == 4, (u32)3, \
> +		   __builtin_choose_expr(size == 8, (u64)4, \
> +					 (void)5)))))
> +#define __CAST_TO_U64(x) ({ \
> +	typeof(x) __src = (x); \
> +	UINTTYPE(sizeof(x)) __dst; \
> +	memcpy(&__dst, &__src, sizeof(__dst)); \
> +	(u64)__dst; })
> +
> +#define __CAST1(a,...) __CAST_TO_U64(a)
> +#define __CAST2(a,...) __CAST_TO_U64(a), __CAST1(__VA_ARGS__)
> +#define __CAST3(a,...) __CAST_TO_U64(a), __CAST2(__VA_ARGS__)
> +#define __CAST4(a,...) __CAST_TO_U64(a), __CAST3(__VA_ARGS__)
> +#define __CAST5(a,...) __CAST_TO_U64(a), __CAST4(__VA_ARGS__)
> +#define __CAST6(a,...) __CAST_TO_U64(a), __CAST5(__VA_ARGS__)
> +#define __CAST7(a,...) __CAST_TO_U64(a), __CAST6(__VA_ARGS__)
> +#define __CAST8(a,...) __CAST_TO_U64(a), __CAST7(__VA_ARGS__)
> +#define __CAST9(a,...) __CAST_TO_U64(a), __CAST8(__VA_ARGS__)
> +#define __CAST10(a,...) __CAST_TO_U64(a), __CAST9(__VA_ARGS__)
> +#define __CAST11(a,...) __CAST_TO_U64(a), __CAST10(__VA_ARGS__)
> +#define __CAST12(a,...) __CAST_TO_U64(a), __CAST11(__VA_ARGS__)
> +/* tracepoints with more than 12 arguments will hit build error */
> +#define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
> +
> +#undef DECLARE_EVENT_CLASS
> +#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
> +/* no 'static' here. The bpf probe functions are global */		\
> +notrace void								\

I'm curious as to why you have notrace here. Since this is separate from
perf and ftrace, it could be useful, for debugging purposes, to allow
function tracing of this function.

> +__bpf_trace_##call(void *__data, proto)					\
> +{									\
> +	struct bpf_prog *prog = __data;					\
> +	\
> +	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args));	\
> +}
> +
> +/*
> + * This part is compiled out, it is only here as a build time check
> + * to make sure that if the tracepoint handling changes, the
> + * bpf probe will fail to compile unless it too is updated.
> + */
> +#undef DEFINE_EVENT
> +#define DEFINE_EVENT(template, call, proto, args)			\
> +static inline void bpf_test_probe_##call(void)				\
> +{									\
> +	check_trace_callback_type_##call(__bpf_trace_##template);	\
> +}
> +
> +
> +#undef DEFINE_EVENT_PRINT
> +#define DEFINE_EVENT_PRINT(template, name, proto, args, print)	\
> +	DEFINE_EVENT(template, name, PARAMS(proto), PARAMS(args))
> +
> +#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
> +#endif /* CONFIG_BPF_EVENTS */
> diff --git a/include/trace/define_trace.h b/include/trace/define_trace.h
> index 96b22ace9ae7..5f8216bc261f 100644
> --- a/include/trace/define_trace.h
> +++ b/include/trace/define_trace.h
> @@ -95,6 +95,7 @@
>  #ifdef TRACEPOINTS_ENABLED
>  #include <trace/trace_events.h>
>  #include <trace/perf.h>
> +#include <trace/bpf_probe.h>
>  #endif
>  
>  #undef TRACE_EVENT
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 18b7c510c511..1878201c2d77 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -94,6 +94,7 @@ enum bpf_cmd {
>  	BPF_MAP_GET_FD_BY_ID,
>  	BPF_OBJ_GET_INFO_BY_FD,
>  	BPF_PROG_QUERY,
> +	BPF_RAW_TRACEPOINT_OPEN,
>  };
>  
>  enum bpf_map_type {
> @@ -134,6 +135,7 @@ enum bpf_prog_type {
>  	BPF_PROG_TYPE_SK_SKB,
>  	BPF_PROG_TYPE_CGROUP_DEVICE,
>  	BPF_PROG_TYPE_SK_MSG,
> +	BPF_PROG_TYPE_RAW_TRACEPOINT,
>  };
>  
>  enum bpf_attach_type {
> @@ -344,6 +346,11 @@ union bpf_attr {
>  		__aligned_u64	prog_ids;
>  		__u32		prog_cnt;
>  	} query;
> +
> +	struct {
> +		__u64 name;
> +		__u32 prog_fd;
> +	} raw_tracepoint;
>  } __attribute__((aligned(8)));
>  
>  /* BPF helper function descriptions:
> @@ -1152,4 +1159,8 @@ struct bpf_cgroup_dev_ctx {
>  	__u32 minor;
>  };
>  
> +struct bpf_raw_tracepoint_args {
> +	__u64 args[0];
> +};
> +
>  #endif /* _UAPI__LINUX_BPF_H__ */
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 3aeb4ea2a93a..7486b450672e 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -1311,6 +1311,81 @@ static int bpf_obj_get(const union bpf_attr *attr)
>  				attr->file_flags);
>  }
>  
> +struct bpf_raw_tracepoint {
> +	struct tracepoint *tp;
> +	struct bpf_prog *prog;
> +};
> +
> +static int bpf_raw_tracepoint_release(struct inode *inode, struct file *filp)
> +{
> +	struct bpf_raw_tracepoint *raw_tp = filp->private_data;
> +
> +	if (raw_tp->prog) {
> +		bpf_probe_unregister(raw_tp->tp, raw_tp->prog);
> +		bpf_prog_put(raw_tp->prog);
> +	}
> +	kfree(raw_tp);
> +	return 0;
> +}
> +
> +static const struct file_operations bpf_raw_tp_fops = {
> +	.release	= bpf_raw_tracepoint_release,
> +	.read		= bpf_dummy_read,
> +	.write		= bpf_dummy_write,
> +};
> +
> +#define BPF_RAW_TRACEPOINT_OPEN_LAST_FIELD raw_tracepoint.prog_fd
> +
> +static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
> +{
> +	struct bpf_raw_tracepoint *raw_tp;
> +	struct tracepoint *tp;
> +	struct bpf_prog *prog;
> +	char tp_name[128];
> +	int tp_fd, err;
> +
> +	if (strncpy_from_user(tp_name, u64_to_user_ptr(attr->raw_tracepoint.name),
> +			      sizeof(tp_name) - 1) < 0)
> +		return -EFAULT;
> +	tp_name[sizeof(tp_name) - 1] = 0;
> +
> +	tp = kernel_tracepoint_find_by_name(tp_name);
> +	if (!tp)
> +		return -ENOENT;
> +
> +	raw_tp = kmalloc(sizeof(*raw_tp), GFP_USER | __GFP_ZERO);

Please use kzalloc() instead of open coding the "__GFP_ZERO"

> +	if (!raw_tp)
> +		return -ENOMEM;
> +	raw_tp->tp = tp;
> +
> +	prog = bpf_prog_get_type(attr->raw_tracepoint.prog_fd,
> +				 BPF_PROG_TYPE_RAW_TRACEPOINT);
> +	if (IS_ERR(prog)) {
> +		err = PTR_ERR(prog);
> +		goto out_free_tp;
> +	}
> +
> +	err = bpf_probe_register(raw_tp->tp, prog);
> +	if (err)
> +		goto out_put_prog;
> +
> +	raw_tp->prog = prog;
> +	tp_fd = anon_inode_getfd("bpf-raw-tracepoint", &bpf_raw_tp_fops, raw_tp,
> +				 O_CLOEXEC);
> +	if (tp_fd < 0) {
> +		bpf_probe_unregister(raw_tp->tp, prog);
> +		err = tp_fd;
> +		goto out_put_prog;
> +	}
> +	return tp_fd;
> +
> +out_put_prog:
> +	bpf_prog_put(prog);
> +out_free_tp:
> +	kfree(raw_tp);
> +	return err;
> +}
> +
>  #ifdef CONFIG_CGROUP_BPF
>  
>  #define BPF_PROG_ATTACH_LAST_FIELD attach_flags
> @@ -1921,6 +1996,9 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
>  	case BPF_OBJ_GET_INFO_BY_FD:
>  		err = bpf_obj_get_info_by_fd(&attr, uattr);
>  		break;
> +	case BPF_RAW_TRACEPOINT_OPEN:
> +		err = bpf_raw_tracepoint_open(&attr);
> +		break;
>  	default:
>  		err = -EINVAL;
>  		break;
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index c634e093951f..00e86aa11360 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -723,6 +723,86 @@ const struct bpf_verifier_ops tracepoint_verifier_ops = {
>  const struct bpf_prog_ops tracepoint_prog_ops = {
>  };
>  
> +/*
> + * bpf_raw_tp_regs are separate from bpf_pt_regs used from skb/xdp
> + * to avoid potential recursive reuse issue when/if tracepoints are added
> + * inside bpf_*_event_output and/or bpf_get_stack_id
> + */
> +static DEFINE_PER_CPU(struct pt_regs, bpf_raw_tp_regs);
> +BPF_CALL_5(bpf_perf_event_output_raw_tp, struct bpf_raw_tracepoint_args *, args,
> +	   struct bpf_map *, map, u64, flags, void *, data, u64, size)
> +{
> +	struct pt_regs *regs = this_cpu_ptr(&bpf_raw_tp_regs);
> +
> +	perf_fetch_caller_regs(regs);
> +	return ____bpf_perf_event_output(regs, map, flags, data, size);
> +}
> +
> +static const struct bpf_func_proto bpf_perf_event_output_proto_raw_tp = {
> +	.func		= bpf_perf_event_output_raw_tp,
> +	.gpl_only	= true,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_CTX,
> +	.arg2_type	= ARG_CONST_MAP_PTR,
> +	.arg3_type	= ARG_ANYTHING,
> +	.arg4_type	= ARG_PTR_TO_MEM,
> +	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
> +};
> +
> +BPF_CALL_3(bpf_get_stackid_raw_tp, struct bpf_raw_tracepoint_args *, args,
> +	   struct bpf_map *, map, u64, flags)
> +{
> +	struct pt_regs *regs = this_cpu_ptr(&bpf_raw_tp_regs);
> +
> +	perf_fetch_caller_regs(regs);
> +	/* similar to bpf_perf_event_output_tp, but pt_regs fetched differently */
> +	return bpf_get_stackid((unsigned long) regs, (unsigned long) map,
> +			       flags, 0, 0);
> +}
> +
> +static const struct bpf_func_proto bpf_get_stackid_proto_raw_tp = {
> +	.func		= bpf_get_stackid_raw_tp,
> +	.gpl_only	= true,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_CTX,
> +	.arg2_type	= ARG_CONST_MAP_PTR,
> +	.arg3_type	= ARG_ANYTHING,
> +};
> +
> +static const struct bpf_func_proto *raw_tp_prog_func_proto(enum bpf_func_id func_id)
> +{
> +	switch (func_id) {
> +	case BPF_FUNC_perf_event_output:
> +		return &bpf_perf_event_output_proto_raw_tp;
> +	case BPF_FUNC_get_stackid:
> +		return &bpf_get_stackid_proto_raw_tp;
> +	default:
> +		return tracing_func_proto(func_id);
> +	}
> +}
> +
> +static bool raw_tp_prog_is_valid_access(int off, int size,
> +					enum bpf_access_type type,
> +					struct bpf_insn_access_aux *info)
> +{
> +	/* largest tracepoint in the kernel has 12 args */
> +	if (off < 0 || off >= sizeof(__u64) * 12)
> +		return false;
> +	if (type != BPF_READ)
> +		return false;
> +	if (off % size != 0)
> +		return false;
> +	return true;
> +}
> +
> +const struct bpf_verifier_ops raw_tracepoint_verifier_ops = {
> +	.get_func_proto  = raw_tp_prog_func_proto,
> +	.is_valid_access = raw_tp_prog_is_valid_access,
> +};
> +
> +const struct bpf_prog_ops raw_tracepoint_prog_ops = {
> +};
> +
>  static bool pe_prog_is_valid_access(int off, int size, enum bpf_access_type type,
>  				    struct bpf_insn_access_aux *info)
>  {
> @@ -896,3 +976,111 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
>  
>  	return ret;
>  }
> +
> +static __always_inline
> +void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
> +{
> +	rcu_read_lock();
> +	preempt_disable();
> +	(void) BPF_PROG_RUN(prog, args);
> +	preempt_enable();
> +	rcu_read_unlock();
> +}
> +

Could you add some comments here to explain what the below is doing.

> +#define UNPACK(...)			__VA_ARGS__
> +#define REPEAT_1(FN, DL, X, ...)	FN(X)
> +#define REPEAT_2(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_1(FN, DL, __VA_ARGS__)
> +#define REPEAT_3(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_2(FN, DL, __VA_ARGS__)
> +#define REPEAT_4(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_3(FN, DL, __VA_ARGS__)
> +#define REPEAT_5(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_4(FN, DL, __VA_ARGS__)
> +#define REPEAT_6(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_5(FN, DL, __VA_ARGS__)
> +#define REPEAT_7(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_6(FN, DL, __VA_ARGS__)
> +#define REPEAT_8(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_7(FN, DL, __VA_ARGS__)
> +#define REPEAT_9(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_8(FN, DL, __VA_ARGS__)
> +#define REPEAT_10(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_9(FN, DL, __VA_ARGS__)
> +#define REPEAT_11(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_10(FN, DL, __VA_ARGS__)
> +#define REPEAT_12(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_11(FN, DL, __VA_ARGS__)
> +#define REPEAT(X, FN, DL, ...)		REPEAT_##X(FN, DL, __VA_ARGS__)
> +
> +#define SARG(X)		u64 arg##X
> +#define COPY(X)		args[X] = arg##X
> +
> +#define __DL_COM	(,)
> +#define __DL_SEM	(;)
> +
> +#define __SEQ_0_11	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
> +
> +#define BPF_TRACE_DEFN_x(x)						\
> +	void bpf_trace_run##x(struct bpf_prog *prog,			\
> +			      REPEAT(x, SARG, __DL_COM, __SEQ_0_11))	\
> +	{								\
> +		u64 args[x];						\
> +		REPEAT(x, COPY, __DL_SEM, __SEQ_0_11);			\
> +		__bpf_trace_run(prog, args);				\
> +	}								\
> +	EXPORT_SYMBOL_GPL(bpf_trace_run##x)
> +BPF_TRACE_DEFN_x(1);
> +BPF_TRACE_DEFN_x(2);
> +BPF_TRACE_DEFN_x(3);
> +BPF_TRACE_DEFN_x(4);
> +BPF_TRACE_DEFN_x(5);
> +BPF_TRACE_DEFN_x(6);
> +BPF_TRACE_DEFN_x(7);
> +BPF_TRACE_DEFN_x(8);
> +BPF_TRACE_DEFN_x(9);
> +BPF_TRACE_DEFN_x(10);
> +BPF_TRACE_DEFN_x(11);
> +BPF_TRACE_DEFN_x(12);
> +
> +static int __bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
> +{
> +	unsigned long addr;
> +	char buf[128];
> +
> +	/*
> +	 * check that program doesn't access arguments beyond what's
> +	 * available in this tracepoint
> +	 */
> +	if (prog->aux->max_ctx_offset > tp->num_args * sizeof(u64))
> +		return -EINVAL;
> +
> +	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
> +	addr = kallsyms_lookup_name(buf);
> +	if (!addr)
> +		return -ENOENT;
> +
> +	return tracepoint_probe_register(tp, (void *)addr, prog);

You are putting a hell of a lot of trust in kallsyms returning the
right thing here. I can see this being very fragile: it calls a
function based purely on the result of a kallsyms lookup. I'm sure the
security folks would love this.

There are a few things you could do to make this a bit more robust. One
is to add a table that points to all the __bpf_trace_* functions, and
verify that the result from kallsyms is in that table.

Honestly, I think this is too much of a shortcut and a hack. I know
you want to keep it "simple" and save space, but you really should do
it the same way ftrace and perf do it. That is, create a section and
have all tracepoints create a structure that holds a pointer to the
tracepoint and to the bpf probe function. Then you don't even need
kernel_tracepoint_find_by_name(); you just iterate over your table and
get the tracepoint and the bpf function associated with it.

Relying on kallsyms to return an address to execute is just way too
extreme and fragile for my liking.

-- Steve



> +}
> +
> +int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
> +{
> +	int err;
> +
> +	mutex_lock(&bpf_event_mutex);
> +	err = __bpf_probe_register(tp, prog);
> +	mutex_unlock(&bpf_event_mutex);
> +	return err;
> +}
> +
> +static int __bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog)
> +{
> +	unsigned long addr;
> +	char buf[128];
> +
> +	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
> +	addr = kallsyms_lookup_name(buf);
> +	if (!addr)
> +		return -ENOENT;
> +
> +	return tracepoint_probe_unregister(tp, (void *)addr, prog);
> +}
> +
> +int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog)
> +{
> +	int err;
> +
> +	mutex_lock(&bpf_event_mutex);
> +	err = __bpf_probe_unregister(tp, prog);
> +	mutex_unlock(&bpf_event_mutex);
> +	return err;
> +}

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 17:02     ` Steven Rostedt
@ 2018-03-27 17:11       ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 17:11 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook

On Tue, 27 Mar 2018 13:02:11 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> Honestly, I think this is too much of a short cut and a hack. I know
> you want to keep it "simple" and save space, but you really should do
> it the same way ftrace and perf do it. That is, create a section and
> have all tracepoints create a structure that holds a pointer to the
> tracepoint and to the bpf probe function. Then you don't even need the
> kernel_tracepoint_find_by_name(), you just iterate over your table and
> you get the tracepoint and the bpf function associated to it.

Also, if you do it the perf/ftrace way, you get support for module
tracepoints pretty much for free. Which would include tracepoints in
networking code that is loaded by a module.

-- Steve

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 17:02     ` Steven Rostedt
@ 2018-03-27 18:45       ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27 18:45 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook

On 3/27/18 10:02 AM, Steven Rostedt wrote:
> On Mon, 26 Mar 2018 19:47:03 -0700
> Alexei Starovoitov <ast@fb.com> wrote:
>
>
>> Ctrl-C of tracing daemon or cmdline tool that uses this feature
>> will automatically detach bpf program, unload it and
>> unregister tracepoint probe.
>>
>> On the kernel side for_each_kernel_tracepoint() is used
>
> You need to update the change log to state
> kernel_tracepoint_find_by_name().

ahh. right. will do.

>> +#undef DECLARE_EVENT_CLASS
>> +#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
>> +/* no 'static' here. The bpf probe functions are global */		\
>> +notrace void								\
>
> I'm curious to why you have notrace here? Since it is separate from
> perf and ftrace, for debugging purposes, it could be useful to allow
> function tracing to this function.

To avoid unnecessary overhead. And I don't think it's useful to trace 
them. They're tiny jump functions of one or two instructions.
Really no point wasting an mcount/fentry entry on them.


>> +static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
>> +{
>> +	struct bpf_raw_tracepoint *raw_tp;
>> +	struct tracepoint *tp;
>> +	struct bpf_prog *prog;
>> +	char tp_name[128];
>> +	int tp_fd, err;
>> +
>> +	if (strncpy_from_user(tp_name, u64_to_user_ptr(attr->raw_tracepoint.name),
>> +			      sizeof(tp_name) - 1) < 0)
>> +		return -EFAULT;
>> +	tp_name[sizeof(tp_name) - 1] = 0;
>> +
>> +	tp = kernel_tracepoint_find_by_name(tp_name);
>> +	if (!tp)
>> +		return -ENOENT;
>> +
>> +	raw_tp = kmalloc(sizeof(*raw_tp), GFP_USER | __GFP_ZERO);
>
> Please use kzalloc(), instead of open coding the "__GFP_ZERO"

right. will do

>
> Could you add some comments here to explain what the below is doing.

To write a proper comment I'd need to understand it, and I don't.
That's the reason I didn't put it in a common header:
it would require a proper comment on what it is and
how one can use it.
I'm expecting Daniel to follow up on this.

>> +#define UNPACK(...)			__VA_ARGS__
>> +#define REPEAT_1(FN, DL, X, ...)	FN(X)
>> +#define REPEAT_2(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_1(FN, DL, __VA_ARGS__)
>> +#define REPEAT_3(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_2(FN, DL, __VA_ARGS__)
>> +#define REPEAT_4(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_3(FN, DL, __VA_ARGS__)
>> +#define REPEAT_5(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_4(FN, DL, __VA_ARGS__)
>> +#define REPEAT_6(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_5(FN, DL, __VA_ARGS__)
>> +#define REPEAT_7(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_6(FN, DL, __VA_ARGS__)
>> +#define REPEAT_8(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_7(FN, DL, __VA_ARGS__)
>> +#define REPEAT_9(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_8(FN, DL, __VA_ARGS__)
>> +#define REPEAT_10(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_9(FN, DL, __VA_ARGS__)
>> +#define REPEAT_11(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_10(FN, DL, __VA_ARGS__)
>> +#define REPEAT_12(FN, DL, X, ...)	FN(X) UNPACK DL REPEAT_11(FN, DL, __VA_ARGS__)
>> +#define REPEAT(X, FN, DL, ...)		REPEAT_##X(FN, DL, __VA_ARGS__)
>> +

>> +
>> +	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
>> +	addr = kallsyms_lookup_name(buf);
>> +	if (!addr)
>> +		return -ENOENT;
>> +
>> +	return tracepoint_probe_register(tp, (void *)addr, prog);
>
> You are putting in a hell of a lot of trust with kallsyms returning
> properly. I can see this being very fragile. This is calling a function
> based on the result of kallsyms. I'm sure the security folks would love
> this.
>
> There's a few things to make this a bit more robust. One is to add a
> table that points to all __bpf_trace_* functions, and verify that the
> result from kallsyms is in that table.
>
> Honestly, I think this is too much of a short cut and a hack. I know
> you want to keep it "simple" and save space, but you really should do
> it the same way ftrace and perf do it. That is, create a section and
> have all tracepoints create a structure that holds a pointer to the
> tracepoint and to the bpf probe function. Then you don't even need the
> kernel_tracepoint_find_by_name(), you just iterate over your table and
> you get the tracepoint and the bpf function associated to it.
>
> Relying on kallsyms to return an address to execute is just way too
> extreme and fragile for my liking.

Wasting an extra 8 bytes * number_of_tracepoints just for lack of trust
in kallsyms doesn't sound like a good trade-off to me.
If kallsyms is inaccurate, all sorts of things will break:
kprobes, livepatch, etc.
I'd rather suggest that ftrace use the kallsyms approach as well
and reduce its memory footprint.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 17:11       ` Steven Rostedt
@ 2018-03-27 18:58         ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 18:58 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook

On Tue, 27 Mar 2018 13:11:43 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Tue, 27 Mar 2018 13:02:11 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> > Honestly, I think this is too much of a short cut and a hack. I know
> > you want to keep it "simple" and save space, but you really should do
> > it the same way ftrace and perf do it. That is, create a section and
> > have all tracepoints create a structure that holds a pointer to the
> > tracepoint and to the bpf probe function. Then you don't even need the
> > kernel_tracepoint_find_by_name(), you just iterate over your table and
> > you get the tracepoint and the bpf function associated to it.  
> 
> Also, if you do it the perf/ftrace way, you get support for module
> tracepoints pretty much for free. Which would include tracepoints in
> networking code that is loaded by a module.

This doesn't include module code (but that wouldn't be too hard to set
up), but I compiled and booted this. I didn't test whether it works (I
don't have a way to test bpf here). This patch applies on top of patch 8;
you can remove patch 7 and fold this into it. Then you can also make the
__bpf_trace_* functions static.

This would be much more robust and less error prone.

-- Steve

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 1ab0e520d6fc..4fab7392e237 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -178,6 +178,15 @@
 #define TRACE_SYSCALLS()
 #endif
 
+#ifdef CONFIG_BPF_EVENTS
+#define BPF_RAW_TP() . = ALIGN(8);		\
+			 VMLINUX_SYMBOL(__start__bpf_raw_tp) = .;	\
+			 KEEP(*(__bpf_raw_tp_map))			\
+			 VMLINUX_SYMBOL(__stop__bpf_raw_tp) = .;
+#else
+#define BPF_RAW_TP()
+#endif
+
 #ifdef CONFIG_SERIAL_EARLYCON
 #define EARLYCON_TABLE() STRUCT_ALIGN();			\
 			 VMLINUX_SYMBOL(__earlycon_table) = .;	\
@@ -576,6 +585,7 @@
 	*(.init.rodata)							\
 	FTRACE_EVENTS()							\
 	TRACE_SYSCALLS()						\
+	BPF_RAW_TP()							\
 	KPROBE_BLACKLIST()						\
 	ERROR_INJECT_WHITELIST()					\
 	MEM_DISCARD(init.rodata)					\
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 399ebe6f90cf..fb4778c0a248 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -470,8 +470,9 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx);
 int perf_event_attach_bpf_prog(struct perf_event *event, struct bpf_prog *prog);
 void perf_event_detach_bpf_prog(struct perf_event *event);
 int perf_event_query_prog_array(struct perf_event *event, void __user *info);
-int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog);
-int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog);
+int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog);
+int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *prog);
+struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name);
 #else
 static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 {
@@ -491,14 +492,18 @@ perf_event_query_prog_array(struct perf_event *event, void __user *info)
 {
 	return -EOPNOTSUPP;
 }
-static inline int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *p)
+static inline int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *p)
 {
 	return -EOPNOTSUPP;
 }
-static inline int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *p)
+static inline int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *p)
 {
 	return -EOPNOTSUPP;
 }
+static inline struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name)
+{
+	return NULL;
+}
 #endif
 
 enum {
diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h
index 39a283c61c51..35db8dd48c4c 100644
--- a/include/linux/tracepoint-defs.h
+++ b/include/linux/tracepoint-defs.h
@@ -36,4 +36,9 @@ struct tracepoint {
 	u32 num_args;
 };
 
+struct bpf_raw_event_map {
+	struct tracepoint	*tp;
+	void			*bpf_func;
+};
+
 #endif
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index f100c63ff19e..6037a2f0108a 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1312,7 +1312,7 @@ static int bpf_obj_get(const union bpf_attr *attr)
 }
 
 struct bpf_raw_tracepoint {
-	struct tracepoint *tp;
+	struct bpf_raw_event_map *btp;
 	struct bpf_prog *prog;
 };
 
@@ -1321,7 +1321,7 @@ static int bpf_raw_tracepoint_release(struct inode *inode, struct file *filp)
 	struct bpf_raw_tracepoint *raw_tp = filp->private_data;
 
 	if (raw_tp->prog) {
-		bpf_probe_unregister(raw_tp->tp, raw_tp->prog);
+		bpf_probe_unregister(raw_tp->btp, raw_tp->prog);
 		bpf_prog_put(raw_tp->prog);
 	}
 	kfree(raw_tp);
@@ -1339,7 +1339,7 @@ static const struct file_operations bpf_raw_tp_fops = {
 static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 {
 	struct bpf_raw_tracepoint *raw_tp;
-	struct tracepoint *tp;
+	struct bpf_raw_event_map *btp;
 	struct bpf_prog *prog;
 	char tp_name[128];
 	int tp_fd, err;
@@ -1349,14 +1349,14 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 		return -EFAULT;
 	tp_name[sizeof(tp_name) - 1] = 0;
 
-	tp = kernel_tracepoint_find_by_name(tp_name);
-	if (!tp)
+	btp = bpf_find_raw_tracepoint(tp_name);
+	if (!btp)
 		return -ENOENT;
 
 	raw_tp = kmalloc(sizeof(*raw_tp), GFP_USER | __GFP_ZERO);
 	if (!raw_tp)
 		return -ENOMEM;
-	raw_tp->tp = tp;
+	raw_tp->btp = btp;
 
 	prog = bpf_prog_get_type(attr->raw_tracepoint.prog_fd,
 				 BPF_PROG_TYPE_RAW_TRACEPOINT);
@@ -1365,7 +1365,7 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 		goto out_free_tp;
 	}
 
-	err = bpf_probe_register(raw_tp->tp, prog);
+	err = bpf_probe_register(raw_tp->btp, prog);
 	if (err)
 		goto out_put_prog;
 
@@ -1373,7 +1373,7 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 	tp_fd = anon_inode_getfd("bpf-raw-tracepoint", &bpf_raw_tp_fops, raw_tp,
 				 O_CLOEXEC);
 	if (tp_fd < 0) {
-		bpf_probe_unregister(raw_tp->tp, prog);
+		bpf_probe_unregister(raw_tp->btp, prog);
 		err = tp_fd;
 		goto out_put_prog;
 	}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index eb58ef156d36..e578b173fe1d 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -965,6 +965,19 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 	return ret;
 }
 
+extern struct bpf_raw_event_map *__start__bpf_raw_tp[];
+extern struct bpf_raw_event_map *__stop__bpf_raw_tp[];
+
+struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name)
+{
+	struct bpf_raw_event_map* const *btp = __start__bpf_raw_tp;
+
+	for (; btp < __stop__bpf_raw_tp; btp++)
+		if (!strcmp((*btp)->tp->name, name))
+			return *btp;
+	return NULL;
+}
+
 static __always_inline
 void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
 {
@@ -1020,10 +1033,9 @@ BPF_TRACE_DEFN_x(10);
 BPF_TRACE_DEFN_x(11);
 BPF_TRACE_DEFN_x(12);
 
-static int __bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
+static int __bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
 {
-	unsigned long addr;
-	char buf[128];
+	struct tracepoint *tp = btp->tp;
 
 	/*
 	 * check that program doesn't access arguments beyond what's
@@ -1032,43 +1044,25 @@ static int __bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
 	if (prog->aux->max_ctx_offset > tp->num_args * sizeof(u64))
 		return -EINVAL;
 
-	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
-	addr = kallsyms_lookup_name(buf);
-	if (!addr)
-		return -ENOENT;
-
-	return tracepoint_probe_register(tp, (void *)addr, prog);
+	return tracepoint_probe_register(tp, (void *)btp->bpf_func, prog);
 }
 
-int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
+int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
 {
 	int err;
 
 	mutex_lock(&bpf_event_mutex);
-	err = __bpf_probe_register(tp, prog);
+	err = __bpf_probe_register(btp, prog);
 	mutex_unlock(&bpf_event_mutex);
 	return err;
 }
 
-static int __bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog)
-{
-	unsigned long addr;
-	char buf[128];
-
-	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
-	addr = kallsyms_lookup_name(buf);
-	if (!addr)
-		return -ENOENT;
-
-	return tracepoint_probe_unregister(tp, (void *)addr, prog);
-}
-
-int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog)
+int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
 {
 	int err;
 
 	mutex_lock(&bpf_event_mutex);
-	err = __bpf_probe_unregister(tp, prog);
+	err = tracepoint_probe_unregister(btp->tp, (void *)btp->bpf_func, prog);
 	mutex_unlock(&bpf_event_mutex);
 	return err;
 }

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 18:45       ` Alexei Starovoitov
@ 2018-03-27 19:00         ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 19:00 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook, Thomas Gleixner, Ingo Molnar

On Tue, 27 Mar 2018 11:45:34 -0700
Alexei Starovoitov <ast@fb.com> wrote:

> >> +
> >> +	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
> >> +	addr = kallsyms_lookup_name(buf);
> >> +	if (!addr)
> >> +		return -ENOENT;
> >> +
> >> +	return tracepoint_probe_register(tp, (void *)addr, prog);  
> >
> > You are putting in a hell of a lot of trust with kallsyms returning
> > properly. I can see this being very fragile. This is calling a function
> > based on the result of kallsyms. I'm sure the security folks would love
> > this.
> >
> > There's a few things to make this a bit more robust. One is to add a
> > table that points to all __bpf_trace_* functions, and verify that the
> > result from kallsyms is in that table.
> >
> > Honestly, I think this is too much of a short cut and a hack. I know
> > you want to keep it "simple" and save space, but you really should do
> > it the same way ftrace and perf do it. That is, create a section and
> > have all tracepoints create a structure that holds a pointer to the
> > tracepoint and to the bpf probe function. Then you don't even need the
> > kernel_tracepoint_find_by_name(), you just iterate over your table and
> > you get the tracepoint and the bpf function associated to it.
> >
> > Relying on kallsyms to return an address to execute is just way too
> > extreme and fragile for my liking.  
> 
> Wasting extra 8bytes * number_of_tracepoints just for lack of trust
> in kallsyms doesn't sound like good trade off to me.
> If kallsyms are inaccurate all sorts of things will break:
> kprobes, livepatch, etc.
> I'd rather suggest for ftrace to use kallsyms approach as well
> and reduce memory footprint.

If Linus, Thomas, Peter, Ingo, and the security folks trust kallsyms to
return a valid function pointer from a name, then sure, we can try
going that way.

-- Steve

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 19:00         ` Steven Rostedt
@ 2018-03-27 19:07           ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 19:07 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook, Thomas Gleixner, Ingo Molnar

On Tue, 27 Mar 2018 15:00:41 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> >  Wasting extra 8bytes * number_of_tracepoints just for lack of trust
> > in kallsyms doesn't sound like good trade off to me.
> > If kallsyms are inaccurate all sorts of things will break:
> > kprobes, livepatch, etc.

And if kallsyms breaks, these will break by failing to attach, or with
some other benign error. Ftrace uses kallsyms to find functions too, but
it only enables functions based on the result; it doesn't use the result
for anything except to compare it against what it already knows.

This is the first case I know of that trusts kallsyms to return
something you expect to execute, with no other validation. It may be
valid, but it also makes me very nervous. If others are fine with such
an approach, then OK, we can enter a new chapter of development where we
use kallsyms to find the functions we want to call.

-- Steve

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 19:00         ` Steven Rostedt
@ 2018-03-27 19:10           ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 19:10 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook, Thomas Gleixner, Ingo Molnar,
	Andrew Morton


[ Added Andrew Morton too ]

On Tue, 27 Mar 2018 15:00:41 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Tue, 27 Mar 2018 11:45:34 -0700
> Alexei Starovoitov <ast@fb.com> wrote:
> 
> > >> +
> > >> +	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
> > >> +	addr = kallsyms_lookup_name(buf);
> > >> +	if (!addr)
> > >> +		return -ENOENT;
> > >> +
> > >> +	return tracepoint_probe_register(tp, (void *)addr, prog);    
> > >
> > > You are putting in a hell of a lot of trust with kallsyms returning
> > > properly. I can see this being very fragile. This is calling a function
> > > based on the result of kallsyms. I'm sure the security folks would love
> > > this.
> > >
> > > There's a few things to make this a bit more robust. One is to add a
> > > table that points to all __bpf_trace_* functions, and verify that the
> > > result from kallsyms is in that table.
> > >
> > > Honestly, I think this is too much of a short cut and a hack. I know
> > > you want to keep it "simple" and save space, but you really should do
> > > it the same way ftrace and perf do it. That is, create a section and
> > > have all tracepoints create a structure that holds a pointer to the
> > > tracepoint and to the bpf probe function. Then you don't even need the
> > > kernel_tracepoint_find_by_name(), you just iterate over your table and
> > > you get the tracepoint and the bpf function associated to it.
> > >
> > > Relying on kallsyms to return an address to execute is just way too
> > > extreme and fragile for my liking.    
> > 
> > Wasting extra 8bytes * number_of_tracepoints just for lack of trust
> > in kallsyms doesn't sound like good trade off to me.
> > If kallsyms are inaccurate all sorts of things will break:
> > kprobes, livepatch, etc.
> > I'd rather suggest for ftrace to use kallsyms approach as well
> > and reduce memory footprint.  
> 
> If Linus, Thomas, Peter, Ingo, and the security folks trust kallsyms to
> return a valid function pointer from a name, then sure, we can try
> going that way.

I would like an ack from Linus and/or Andrew before we go further down
this road.

-- Steve

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 19:00         ` Steven Rostedt
                           ` (2 preceding siblings ...)
  (?)
@ 2018-03-27 19:10         ` Mathieu Desnoyers
  -1 siblings, 0 replies; 57+ messages in thread
From: Mathieu Desnoyers @ 2018-03-27 19:10 UTC (permalink / raw)
  To: rostedt
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann,
	Linus Torvalds, Peter Zijlstra, netdev, kernel-team, linux-api,
	Kees Cook, Thomas Gleixner, Ingo Molnar

----- On Mar 27, 2018, at 3:00 PM, rostedt rostedt@goodmis.org wrote:

> On Tue, 27 Mar 2018 11:45:34 -0700
> Alexei Starovoitov <ast@fb.com> wrote:
> 
>> >> +
>> >> +	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
>> >> +	addr = kallsyms_lookup_name(buf);
>> >> +	if (!addr)
>> >> +		return -ENOENT;
>> >> +
>> >> +	return tracepoint_probe_register(tp, (void *)addr, prog);
>> >
>> > You are putting in a hell of a lot of trust with kallsyms returning
>> > properly. I can see this being very fragile. This is calling a function
>> > based on the result of kallsyms. I'm sure the security folks would love
>> > this.
>> >
>> > There's a few things to make this a bit more robust. One is to add a
>> > table that points to all __bpf_trace_* functions, and verify that the
>> > result from kallsyms is in that table.
>> >
>> > Honestly, I think this is too much of a short cut and a hack. I know
>> > you want to keep it "simple" and save space, but you really should do
>> > it the same way ftrace and perf do it. That is, create a section and
>> > have all tracepoints create a structure that holds a pointer to the
>> > tracepoint and to the bpf probe function. Then you don't even need the
>> > kernel_tracepoint_find_by_name(), you just iterate over your table and
>> > you get the tracepoint and the bpf function associated to it.
>> >
>> > Relying on kallsyms to return an address to execute is just way too
>> > extreme and fragile for my liking.
>> 
>> Wasting extra 8bytes * number_of_tracepoints just for lack of trust
>> in kallsyms doesn't sound like good trade off to me.
>> If kallsyms are inaccurate all sorts of things will break:
>> kprobes, livepatch, etc.
>> I'd rather suggest for ftrace to use kallsyms approach as well
>> and reduce memory footprint.
> 
> If Linus, Thomas, Peter, Ingo, and the security folks trust kallsyms to
> return a valid function pointer from a name, then sure, we can try
> going that way.

This will crash on ARM Thumb2 kernels. Also, how is this expected to
work on PowerPC ABIv1 without KALLSYMS_ALL?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 18:58         ` Steven Rostedt
@ 2018-03-27 21:04           ` Steven Rostedt
  -1 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-27 21:04 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook

On Tue, 27 Mar 2018 14:58:24 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> +extern struct bpf_raw_event_map *__start__bpf_raw_tp[];
> +extern struct bpf_raw_event_map *__stop__bpf_raw_tp[];
> +
> +struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name)
> +{
> +	struct bpf_raw_event_map* const *btp = __start__bpf_raw_tp;
> +
> +	for (; btp < __stop__bpf_raw_tp; btp++)
> +		if (!strcmp((*btp)->tp->name, name))
> +			return *btp;
> +	return NULL;
> +}
> +

OK, this part is broken, and for some reason it didn't include my
changes to bpf_probe.h. I also tested this without setting BPF_EVENTS,
so I wasn't actually testing it.

I added a test in event_trace_init() to make sure that it worked:
(Not included in the patch below)

{
	struct bpf_raw_event_map *btp;
	btp = bpf_find_raw_tracepoint("sched_switch");
	if (btp)
		printk("found BPF_RAW_TRACEPOINT: %s %pS\n",
		       btp->tp->name, btp->bpf_func);
	else
		printk("COULD NOT FIND BPF_RAW_TRACEPOINT\n");
}

And it found the tracepoint.

Here's take two....

You can add my: Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 1ab0e520d6fc..4fab7392e237 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -178,6 +178,15 @@
 #define TRACE_SYSCALLS()
 #endif
 
+#ifdef CONFIG_BPF_EVENTS
+#define BPF_RAW_TP() . = ALIGN(8);		\
+			 VMLINUX_SYMBOL(__start__bpf_raw_tp) = .;	\
+			 KEEP(*(__bpf_raw_tp_map))			\
+			 VMLINUX_SYMBOL(__stop__bpf_raw_tp) = .;
+#else
+#define BPF_RAW_TP()
+#endif
+
 #ifdef CONFIG_SERIAL_EARLYCON
 #define EARLYCON_TABLE() STRUCT_ALIGN();			\
 			 VMLINUX_SYMBOL(__earlycon_table) = .;	\
@@ -576,6 +585,7 @@
 	*(.init.rodata)							\
 	FTRACE_EVENTS()							\
 	TRACE_SYSCALLS()						\
+	BPF_RAW_TP()							\
 	KPROBE_BLACKLIST()						\
 	ERROR_INJECT_WHITELIST()					\
 	MEM_DISCARD(init.rodata)					\
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 399ebe6f90cf..fb4778c0a248 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -470,8 +470,9 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx);
 int perf_event_attach_bpf_prog(struct perf_event *event, struct bpf_prog *prog);
 void perf_event_detach_bpf_prog(struct perf_event *event);
 int perf_event_query_prog_array(struct perf_event *event, void __user *info);
-int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog);
-int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog);
+int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog);
+int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *prog);
+struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name);
 #else
 static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 {
@@ -491,14 +492,18 @@ perf_event_query_prog_array(struct perf_event *event, void __user *info)
 {
 	return -EOPNOTSUPP;
 }
-static inline int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *p)
+static inline int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *p)
 {
 	return -EOPNOTSUPP;
 }
-static inline int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *p)
+static inline int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *p)
 {
 	return -EOPNOTSUPP;
 }
+static inline struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name)
+{
+	return NULL;
+}
 #endif
 
 enum {
diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h
index 39a283c61c51..35db8dd48c4c 100644
--- a/include/linux/tracepoint-defs.h
+++ b/include/linux/tracepoint-defs.h
@@ -36,4 +36,9 @@ struct tracepoint {
 	u32 num_args;
 };
 
+struct bpf_raw_event_map {
+	struct tracepoint	*tp;
+	void			*bpf_func;
+};
+
 #endif
diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index d2cc0663e618..bb8ed2f530ad 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -76,7 +76,13 @@ __bpf_trace_##call(void *__data, proto)					\
 static inline void bpf_test_probe_##call(void)				\
 {									\
 	check_trace_callback_type_##call(__bpf_trace_##template);	\
-}
+}									\
+static struct bpf_raw_event_map	__used					\
+   __attribute__((section("__bpf_raw_tp_map")))				\
+__bpf_trace_tp_map_##call = {						\
+	.tp		= &__tracepoint_##call,				\
+	.bpf_func	= (void *)__bpf_trace_##template,		\
+};
 
 
 #undef DEFINE_EVENT_PRINT
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index f100c63ff19e..6037a2f0108a 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1312,7 +1312,7 @@ static int bpf_obj_get(const union bpf_attr *attr)
 }
 
 struct bpf_raw_tracepoint {
-	struct tracepoint *tp;
+	struct bpf_raw_event_map *btp;
 	struct bpf_prog *prog;
 };
 
@@ -1321,7 +1321,7 @@ static int bpf_raw_tracepoint_release(struct inode *inode, struct file *filp)
 	struct bpf_raw_tracepoint *raw_tp = filp->private_data;
 
 	if (raw_tp->prog) {
-		bpf_probe_unregister(raw_tp->tp, raw_tp->prog);
+		bpf_probe_unregister(raw_tp->btp, raw_tp->prog);
 		bpf_prog_put(raw_tp->prog);
 	}
 	kfree(raw_tp);
@@ -1339,7 +1339,7 @@ static const struct file_operations bpf_raw_tp_fops = {
 static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 {
 	struct bpf_raw_tracepoint *raw_tp;
-	struct tracepoint *tp;
+	struct bpf_raw_event_map *btp;
 	struct bpf_prog *prog;
 	char tp_name[128];
 	int tp_fd, err;
@@ -1349,14 +1349,14 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 		return -EFAULT;
 	tp_name[sizeof(tp_name) - 1] = 0;
 
-	tp = kernel_tracepoint_find_by_name(tp_name);
-	if (!tp)
+	btp = bpf_find_raw_tracepoint(tp_name);
+	if (!btp)
 		return -ENOENT;
 
 	raw_tp = kmalloc(sizeof(*raw_tp), GFP_USER | __GFP_ZERO);
 	if (!raw_tp)
 		return -ENOMEM;
-	raw_tp->tp = tp;
+	raw_tp->btp = btp;
 
 	prog = bpf_prog_get_type(attr->raw_tracepoint.prog_fd,
 				 BPF_PROG_TYPE_RAW_TRACEPOINT);
@@ -1365,7 +1365,7 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 		goto out_free_tp;
 	}
 
-	err = bpf_probe_register(raw_tp->tp, prog);
+	err = bpf_probe_register(raw_tp->btp, prog);
 	if (err)
 		goto out_put_prog;
 
@@ -1373,7 +1373,7 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 	tp_fd = anon_inode_getfd("bpf-raw-tracepoint", &bpf_raw_tp_fops, raw_tp,
 				 O_CLOEXEC);
 	if (tp_fd < 0) {
-		bpf_probe_unregister(raw_tp->tp, prog);
+		bpf_probe_unregister(raw_tp->btp, prog);
 		err = tp_fd;
 		goto out_put_prog;
 	}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index eb58ef156d36..d0975094cff7 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -965,6 +965,20 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 	return ret;
 }
 
+extern struct bpf_raw_event_map __start__bpf_raw_tp;
+extern struct bpf_raw_event_map __stop__bpf_raw_tp;
+
+struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name)
+{
+	struct bpf_raw_event_map *btp = &__start__bpf_raw_tp;
+
+	for (; btp < &__stop__bpf_raw_tp; btp++) {
+		if (!strcmp(btp->tp->name, name))
+			return btp;
+	}
+	return NULL;
+}
+
 static __always_inline
 void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
 {
@@ -1020,10 +1036,9 @@ BPF_TRACE_DEFN_x(10);
 BPF_TRACE_DEFN_x(11);
 BPF_TRACE_DEFN_x(12);
 
-static int __bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
+static int __bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
 {
-	unsigned long addr;
-	char buf[128];
+	struct tracepoint *tp = btp->tp;
 
 	/*
 	 * check that program doesn't access arguments beyond what's
@@ -1032,43 +1047,25 @@ static int __bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
 	if (prog->aux->max_ctx_offset > tp->num_args * sizeof(u64))
 		return -EINVAL;
 
-	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
-	addr = kallsyms_lookup_name(buf);
-	if (!addr)
-		return -ENOENT;
-
-	return tracepoint_probe_register(tp, (void *)addr, prog);
+	return tracepoint_probe_register(tp, (void *)btp->bpf_func, prog);
 }
 
-int bpf_probe_register(struct tracepoint *tp, struct bpf_prog *prog)
+int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
 {
 	int err;
 
 	mutex_lock(&bpf_event_mutex);
-	err = __bpf_probe_register(tp, prog);
+	err = __bpf_probe_register(btp, prog);
 	mutex_unlock(&bpf_event_mutex);
 	return err;
 }
 
-static int __bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog)
-{
-	unsigned long addr;
-	char buf[128];
-
-	snprintf(buf, sizeof(buf), "__bpf_trace_%s", tp->name);
-	addr = kallsyms_lookup_name(buf);
-	if (!addr)
-		return -ENOENT;
-
-	return tracepoint_probe_unregister(tp, (void *)addr, prog);
-}
-
-int bpf_probe_unregister(struct tracepoint *tp, struct bpf_prog *prog)
+int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
 {
 	int err;
 
 	mutex_lock(&bpf_event_mutex);
-	err = __bpf_probe_unregister(tp, prog);
+	err = tracepoint_probe_unregister(btp->tp, (void *)btp->bpf_func, prog);
 	mutex_unlock(&bpf_event_mutex);
 	return err;
 }

^ permalink raw reply related	[flat|nested] 57+ messages in thread

+	err = tracepoint_probe_unregister(btp->tp, (void *)btp->bpf_func, prog);
 	mutex_unlock(&bpf_event_mutex);
 	return err;
 }

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 21:04           ` Steven Rostedt
@ 2018-03-27 22:48             ` Alexei Starovoitov
  -1 siblings, 0 replies; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-27 22:48 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: davem, daniel, torvalds, peterz, netdev, kernel-team, linux-api,
	Mathieu Desnoyers, Kees Cook

On 3/27/18 2:04 PM, Steven Rostedt wrote:
>
> +#ifdef CONFIG_BPF_EVENTS
> +#define BPF_RAW_TP() . = ALIGN(8);		\
> +			 VMLINUX_SYMBOL(__start__bpf_raw_tp) = .;	\
> +			 KEEP(*(__bpf_raw_tp_map))			\
> +			 VMLINUX_SYMBOL(__stop__bpf_raw_tp) = .;

That looks to be correct, but something is wrong with it.

Can you try your mini test with KASAN on?

I'm seeing this crash:
test_stacktrace_map_raw_tp:PASS:prog_load raw tp 0 nsec
[   18.760662] start ffffffff84642438 stop ffffffff84644f60
[   18.761467] i 1 btp->tp cccccccccccccccc
[   18.762064] kasan: CONFIG_KASAN_INLINE enabled
[   18.762704] kasan: GPF could be caused by NULL-ptr deref or user memory access
[   18.765125] general protection fault: 0000 [#1] SMP KASAN PTI
[   18.765830] Modules linked in:
[   18.778358] Call Trace:
[   18.778674]  bpf_raw_tracepoint_open.isra.27+0x92/0x380

For some reason __start__bpf_raw_tp is off by 8.
Not sure how it works for you.

(gdb) p &__bpf_trace_tp_map_sys_exit
$10 = (struct bpf_raw_event_map *) 0xffffffff84642440 <__bpf_trace_tp_map_sys_exit>

(gdb)  p &__start__bpf_raw_tp
$7 = (<data variable, no debug info> *) 0xffffffff84642438

(gdb)  p (void*)(&__start__bpf_raw_tp)+8
$11 = (void *) 0xffffffff84642440 <__bpf_trace_tp_map_sys_exit>


* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 22:48             ` Alexei Starovoitov
  (?)
@ 2018-03-27 23:13             ` Mathieu Desnoyers
  2018-03-28  0:00               ` Alexei Starovoitov
  -1 siblings, 1 reply; 57+ messages in thread
From: Mathieu Desnoyers @ 2018-03-27 23:13 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: rostedt, David S. Miller, Daniel Borkmann, Linus Torvalds,
	Peter Zijlstra, netdev, kernel-team, linux-api, Kees Cook

----- On Mar 27, 2018, at 6:48 PM, Alexei Starovoitov ast@fb.com wrote:

> On 3/27/18 2:04 PM, Steven Rostedt wrote:
>>
>> +#ifdef CONFIG_BPF_EVENTS
>> +#define BPF_RAW_TP() . = ALIGN(8);		\

Given that the section consists of 16-byte structure elements
on architectures with 8-byte pointers, this ". = ALIGN(8)" should
be turned into a STRUCT_ALIGN(), especially given that the compiler
is free to up-align the structure to 32 bytes.

This could explain the kasan splat you are experiencing.

Thanks,

Mathieu


>> +			 VMLINUX_SYMBOL(__start__bpf_raw_tp) = .;	\
>> +			 KEEP(*(__bpf_raw_tp_map))			\
>> +			 VMLINUX_SYMBOL(__stop__bpf_raw_tp) = .;
> 
> that looks to be correct, but something wrong with it.
> 
> Can you try your mini test with kasan on ?
> 
> I'm seeing this crash:
> test_stacktrace_[   18.760662] start ffffffff84642438 stop ffffffff84644f60
> map_raw_tp:PASS:[   18.761467] i 1 btp->tp cccccccccccccccc
> prog_load raw tp[   18.762064] kasan: CONFIG_KASAN_INLINE enabled
>  0 nsec
> [   18.762704] kasan: GPF could be caused by NULL-ptr deref or user
> memory access
> [   18.765125] general protection fault: 0000 [#1] SMP KASAN PTI
> [   18.765830] Modules linked in:
> [   18.778358] Call Trace:
> [   18.778674]  bpf_raw_tracepoint_open.isra.27+0x92/0x380
> 
> for some reason the start_bpf_raw_tp is off by 8.
> Not sure how it works for you.
> 
> (gdb) p &__bpf_trace_tp_map_sys_exit
> $10 = (struct bpf_raw_event_map *) 0xffffffff84642440
> <__bpf_trace_tp_map_sys_exit>
> 
> (gdb)  p &__start__bpf_raw_tp
> $7 = (<data variable, no debug info> *) 0xffffffff84642438
> 
> (gdb)  p (void*)(&__start__bpf_raw_tp)+8
> $11 = (void *) 0xffffffff84642440 <__bpf_trace_tp_map_sys_exit>

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-27 23:13             ` Mathieu Desnoyers
@ 2018-03-28  0:00               ` Alexei Starovoitov
  2018-03-28  0:44                 ` Mathieu Desnoyers
  0 siblings, 1 reply; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-28  0:00 UTC (permalink / raw)
  To: Mathieu Desnoyers
  Cc: rostedt, David S. Miller, Daniel Borkmann, Linus Torvalds,
	Peter Zijlstra, netdev, kernel-team, linux-api, Kees Cook

On 3/27/18 4:13 PM, Mathieu Desnoyers wrote:
> ----- On Mar 27, 2018, at 6:48 PM, Alexei Starovoitov ast@fb.com wrote:
>
>> On 3/27/18 2:04 PM, Steven Rostedt wrote:
>>>
>>> +#ifdef CONFIG_BPF_EVENTS
>>> +#define BPF_RAW_TP() . = ALIGN(8);		\
>
> Given that the section consists of a 16-bytes structure elements
> on architectures with 8 bytes pointers, this ". = ALIGN(8)" should
> be turned into a STRUCT_ALIGN(), especially given that the compiler
> is free to up-align the structure on 32 bytes.

STRUCT_ALIGN fixed the 'off by 8' issue with KASAN,
but it fails without KASAN too.
For some reason the whole region __start__bpf_raw_tp - __stop__bpf_raw_tp
comes initialized with 0xcc:
[   22.703562] i 1 btp ffffffff8288e530 btp->tp cccccccccccccccc func cccccccccccccccc
[   22.704638] i 2 btp ffffffff8288e540 btp->tp cccccccccccccccc func cccccccccccccccc
[   22.705599] i 3 btp ffffffff8288e550 btp->tp cccccccccccccccc func cccccccccccccccc
[   22.706551] i 4 btp ffffffff8288e560 btp->tp cccccccccccccccc func cccccccccccccccc
[   22.707503] i 5 btp ffffffff8288e570 btp->tp cccccccccccccccc func cccccccccccccccc
[   22.708452] i 6 btp ffffffff8288e580 btp->tp cccccccccccccccc func cccccccccccccccc
[   22.709406] i 7 btp ffffffff8288e590 btp->tp cccccccccccccccc func cccccccccccccccc
[   22.710368] i 8 btp ffffffff8288e5a0 btp->tp cccccccccccccccc func cccccccccccccccc

while gdb shows that everything is good inside vmlinux
for exactly these addresses.
Is some other linker magic missing?


* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-28  0:00               ` Alexei Starovoitov
@ 2018-03-28  0:44                 ` Mathieu Desnoyers
  2018-03-28  0:51                   ` Alexei Starovoitov
  0 siblings, 1 reply; 57+ messages in thread
From: Mathieu Desnoyers @ 2018-03-28  0:44 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: rostedt, David S. Miller, Daniel Borkmann, Linus Torvalds,
	Peter Zijlstra, netdev, kernel-team, linux-api, Kees Cook

----- On Mar 27, 2018, at 8:00 PM, Alexei Starovoitov ast@fb.com wrote:

> On 3/27/18 4:13 PM, Mathieu Desnoyers wrote:
>> ----- On Mar 27, 2018, at 6:48 PM, Alexei Starovoitov ast@fb.com wrote:
>>
>>> On 3/27/18 2:04 PM, Steven Rostedt wrote:
>>>>
>>>> +#ifdef CONFIG_BPF_EVENTS
>>>> +#define BPF_RAW_TP() . = ALIGN(8);		\
>>
>> Given that the section consists of a 16-bytes structure elements
>> on architectures with 8 bytes pointers, this ". = ALIGN(8)" should
>> be turned into a STRUCT_ALIGN(), especially given that the compiler
>> is free to up-align the structure on 32 bytes.
> 
> STRUCT_ALIGN fixed the 'off by 8' issue with kasan,
> but it fails without kasan too.
> For some reason the whole region __start__bpf_raw_tp - __stop__bpf_raw_tp
> comes inited with cccc:
> [   22.703562] i 1 btp ffffffff8288e530 btp->tp cccccccccccccccc func
> cccccccccccccccc
> [   22.704638] i 2 btp ffffffff8288e540 btp->tp cccccccccccccccc func
> cccccccccccccccc
> [   22.705599] i 3 btp ffffffff8288e550 btp->tp cccccccccccccccc func
> cccccccccccccccc
> [   22.706551] i 4 btp ffffffff8288e560 btp->tp cccccccccccccccc func
> cccccccccccccccc
> [   22.707503] i 5 btp ffffffff8288e570 btp->tp cccccccccccccccc func
> cccccccccccccccc
> [   22.708452] i 6 btp ffffffff8288e580 btp->tp cccccccccccccccc func
> cccccccccccccccc
> [   22.709406] i 7 btp ffffffff8288e590 btp->tp cccccccccccccccc func
> cccccccccccccccc
> [   22.710368] i 8 btp ffffffff8288e5a0 btp->tp cccccccccccccccc func
> cccccccccccccccc
> 
> while gdb shows that everything is good inside vmlinux
> for exactly these addresses.
> Some other linker magic missing?

No, Steven's iteration code is incorrect.

+extern struct bpf_raw_event_map __start__bpf_raw_tp;
+extern struct bpf_raw_event_map __stop__bpf_raw_tp;

That should be:

extern struct bpf_raw_event_map __start__bpf_raw_tp[];
extern struct bpf_raw_event_map __stop__bpf_raw_tp[];


+
+struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name)
+{
+        const struct bpf_raw_event_map *btp = &__start__bpf_raw_tp;

const struct bpf_raw_event_map *btp = __start__bpf_raw_tp;

+        int i = 0;
+
+        for (; btp < &__stop__bpf_raw_tp; btp++) {

for (; btp < __stop__bpf_raw_tp; btp++) {

Those start/stop symbols are given their address by the linker
automatically (this is a GNU linker extension). We don't want
pointers to the symbols, but rather the symbols per se to act
as start/stop addresses.

Thanks,

Mathieu

+                i++;
+                if (!strcmp(btp->tp->name, name))
+                        return btp;
+        }
+        return NULL;
+}



-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-28  0:44                 ` Mathieu Desnoyers
@ 2018-03-28  0:51                   ` Alexei Starovoitov
  2018-03-28 14:06                     ` Steven Rostedt
  0 siblings, 1 reply; 57+ messages in thread
From: Alexei Starovoitov @ 2018-03-28  0:51 UTC (permalink / raw)
  To: Mathieu Desnoyers
  Cc: rostedt, David S. Miller, Daniel Borkmann, Linus Torvalds,
	Peter Zijlstra, netdev, kernel-team, linux-api, Kees Cook

On 3/27/18 5:44 PM, Mathieu Desnoyers wrote:
> ----- On Mar 27, 2018, at 8:00 PM, Alexei Starovoitov ast@fb.com wrote:
>
>> On 3/27/18 4:13 PM, Mathieu Desnoyers wrote:
>>> ----- On Mar 27, 2018, at 6:48 PM, Alexei Starovoitov ast@fb.com wrote:
>>>
>>>> On 3/27/18 2:04 PM, Steven Rostedt wrote:
>>>>>
>>>>> +#ifdef CONFIG_BPF_EVENTS
>>>>> +#define BPF_RAW_TP() . = ALIGN(8);		\
>>>
>>> Given that the section consists of a 16-bytes structure elements
>>> on architectures with 8 bytes pointers, this ". = ALIGN(8)" should
>>> be turned into a STRUCT_ALIGN(), especially given that the compiler
>>> is free to up-align the structure on 32 bytes.
>>
>> STRUCT_ALIGN fixed the 'off by 8' issue with kasan,
>> but it fails without kasan too.
>> For some reason the whole region __start__bpf_raw_tp - __stop__bpf_raw_tp
>> comes inited with cccc:
>> [   22.703562] i 1 btp ffffffff8288e530 btp->tp cccccccccccccccc func
>> cccccccccccccccc
>> [   22.704638] i 2 btp ffffffff8288e540 btp->tp cccccccccccccccc func
>> cccccccccccccccc
>> [   22.705599] i 3 btp ffffffff8288e550 btp->tp cccccccccccccccc func
>> cccccccccccccccc
>> [   22.706551] i 4 btp ffffffff8288e560 btp->tp cccccccccccccccc func
>> cccccccccccccccc
>> [   22.707503] i 5 btp ffffffff8288e570 btp->tp cccccccccccccccc func
>> cccccccccccccccc
>> [   22.708452] i 6 btp ffffffff8288e580 btp->tp cccccccccccccccc func
>> cccccccccccccccc
>> [   22.709406] i 7 btp ffffffff8288e590 btp->tp cccccccccccccccc func
>> cccccccccccccccc
>> [   22.710368] i 8 btp ffffffff8288e5a0 btp->tp cccccccccccccccc func
>> cccccccccccccccc
>>
>> while gdb shows that everything is good inside vmlinux
>> for exactly these addresses.
>> Some other linker magic missing?
>
> No, Steven's iteration code is incorrect.
>
> +extern struct bpf_raw_event_map __start__bpf_raw_tp;
> +extern struct bpf_raw_event_map __stop__bpf_raw_tp;
>
> That should be:
>
> extern struct bpf_raw_event_map __start__bpf_raw_tp[];
> extern struct bpf_raw_event_map __stop__bpf_raw_tp[];
>
>
> +
> +struct bpf_raw_event_map *bpf_find_raw_tracepoint(const char *name)
> +{
> +        const struct bpf_raw_event_map *btp = &__start__bpf_raw_tp;
>
> const struct bpf_raw_event_map *btp = __start__bpf_raw_tp;
>
> +        int i = 0;
> +
> +        for (; btp < &__stop__bpf_raw_tp; btp++) {
>
> for (; btp < __stop__bpf_raw_tp; btp++) {
>
> Those start/stop symbols are given their address by the linker
> automatically (this is a GNU linker extension). We don't want
> pointers to the symbols, but rather the symbols per se to act
> as start/stop addresses.

Right, that part I fixed first.

It turned out the section was placed in init data and got poisoned.
This fixes it:
@@ -258,6 +258,7 @@
         LIKELY_PROFILE()                                                \
         BRANCH_PROFILE()                                                \
         TRACE_PRINTKS()                                                 \
+       BPF_RAW_TP()                                                    \
         TRACEPOINT_STR()

  /*
@@ -585,7 +586,6 @@
         *(.init.rodata)                                                 \
         FTRACE_EVENTS()                                                 \
         TRACE_SYSCALLS()                                                \
-       BPF_RAW_TP()                                                    \
         KPROBE_BLACKLIST()                                              \
         ERROR_INJECT_WHITELIST()                                        \
         MEM_DISCARD(init.rodata)                                        \

and it works :)
I will clean up a few other nits I found while debugging and respin.


* Re: [PATCH v6 bpf-next 08/11] bpf: introduce BPF_RAW_TRACEPOINT
  2018-03-28  0:51                   ` Alexei Starovoitov
@ 2018-03-28 14:06                     ` Steven Rostedt
  0 siblings, 0 replies; 57+ messages in thread
From: Steven Rostedt @ 2018-03-28 14:06 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Mathieu Desnoyers, David S. Miller, Daniel Borkmann,
	Linus Torvalds, Peter Zijlstra, netdev, kernel-team, linux-api,
	Kees Cook

On Tue, 27 Mar 2018 17:51:55 -0700
Alexei Starovoitov <ast@fb.com> wrote:

> Turned out it was in init.data section and got poisoned.
> this fixes it:
> @@ -258,6 +258,7 @@
>          LIKELY_PROFILE()                                                \
>          BRANCH_PROFILE()                                                \
>          TRACE_PRINTKS()                                                 \
> +       BPF_RAW_TP()                                                    \
>          TRACEPOINT_STR()
> 
>   /*
> @@ -585,7 +586,6 @@
>          *(.init.rodata)                                                 \
>          FTRACE_EVENTS()                                                 \
>          TRACE_SYSCALLS()                                                \
> -       BPF_RAW_TP()                                                    \
>          KPROBE_BLACKLIST()                                              \
>          ERROR_INJECT_WHITELIST()                                        \
>          MEM_DISCARD(init.rodata)                                        \
> 
> and it works :)
> I will clean few other nits I found while debugging and respin.

Getting it properly working was an exercise left to the reader ;-)

Sorry about that. I did a bit of copy and paste to get it working,
copied from code that did things a bit differently, and massaged it
by hand, doing it quickly as I had other things to work on.

-- Steve
