* [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot
@ 2021-09-07 20:27 Song Liu
  2021-09-07 20:28 ` [PATCH v6 bpf-next 1/3] perf: enable branch record for software events Song Liu
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Song Liu @ 2021-09-07 20:27 UTC (permalink / raw)
  To: bpf, linux-kernel; +Cc: acme, peterz, mingo, kjain, kernel-team, Song Liu

Changes v4 => v5:
1. Modify perf_snapshot_branch_stack_t to save some memcpy. (Andrii)
2. Minor fixes in selftests. (Andrii)

Changes v3 => v4:
1. Do not reshuffle intel_pmu_disable_all(). Use some inline to save LBR
   entries. (Peter)
2. Move static_call(perf_snapshot_branch_stack) to the helper. (Alexei)
3. Add argument flags to bpf_get_branch_snapshot. (Andrii)
4. Make MAX_BRANCH_SNAPSHOT an enum (Andrii). And rename it as
   PERF_MAX_BRANCH_SNAPSHOT
5. Make bpf_get_branch_snapshot similar to bpf_read_branch_records.
   (Andrii)
6. Move the test target function to bpf_testmod. Updated kallsyms_find_next
   to work properly with modules. (Andrii)

Changes v2 => v3:
1. Fix the use of static_call. (Peter)
2. Limit the use to perfmon version >= 2. (Peter)
3. Modify intel_pmu_snapshot_branch_stack() to use intel_pmu_disable_all
   and intel_pmu_enable_all().

Changes v1 => v2:
1. Rename the helper as bpf_get_branch_snapshot;
2. Fix/simplify the use of static_call;
3. Instead of percpu variables, let intel_pmu_snapshot_branch_stack output
   branch records to an output argument of type perf_branch_snapshot.

Branch stack can be very useful in understanding software events. For
example, when a long function, e.g. sys_perf_event_open, returns an errno,
it is not obvious why the function failed. Branch stack can provide very
helpful information in this type of scenario.

This set adds support for reading the branch stack with a new BPF helper,
bpf_get_branch_snapshot(). Currently, this is only supported on Intel
systems. It is also possible to support the same feature on PowerPC.
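
For a feel of the BPF-side API, here is a minimal fexit program condensed
from the selftest in patch 3/3. The helper takes an output buffer, its size
in bytes, and a flags argument that must be zero for now; on success it
returns the number of bytes written:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

#define ENTRY_CNT 32
struct perf_branch_entry entries[ENTRY_CNT] = {};
long total_entries = 0;

SEC("fexit/bpf_testmod_loop_test")
int BPF_PROG(test1, int n, int ret)
{
	/* call the helper first, so that as few LBR entries as possible
	 * are wasted on the BPF program itself
	 */
	total_entries = bpf_get_branch_snapshot(entries, sizeof(entries), 0);
	total_entries /= sizeof(struct perf_branch_entry);
	return 0;
}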

The hardware that records the branch stack is not stopped automatically on
software events. Therefore, it is necessary to stop it in software as soon
as possible. Otherwise, the hardware buffers/registers will be overwritten
by new entries. One of the key design considerations in this set is to
minimize the number of branch record entries used up between the event
trigger and the point where the hardware recorder is stopped. Based on
this goal, the current design differs from the discussions in the
original RFC [1]:
 1) Static call is used when supported, to save function pointer
    dereference;
 2) intel_pmu_lbr_disable_all is used instead of perf_pmu_disable(),
    because the latter uses about 10 entries before stopping LBR.

With the current code, on an Intel CPU, LBR is stopped about 10 branch
entries after the fexit event triggers:

ID: 0 from intel_pmu_lbr_disable_all+58 to intel_pmu_lbr_disable_all+93
ID: 1 from intel_pmu_lbr_disable_all+54 to intel_pmu_lbr_disable_all+58
ID: 2 from intel_pmu_snapshot_branch_stack+102 to intel_pmu_lbr_disable_all+0
ID: 3 from bpf_get_branch_snapshot+18 to intel_pmu_snapshot_branch_stack+0
ID: 4 from bpf_get_branch_snapshot+18 to bpf_get_branch_snapshot+0
ID: 5 from __brk_limit+474918983 to bpf_get_branch_snapshot+0
ID: 6 from __bpf_prog_enter+34 to __brk_limit+474918971
ID: 7 from migrate_disable+60 to __bpf_prog_enter+9
ID: 8 from __bpf_prog_enter+4 to migrate_disable+0
ID: 9 from bpf_testmod_loop_test+20 to __bpf_prog_enter+0
ID: 10 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
ID: 11 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
ID: 12 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
ID: 13 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
...

[1] https://lore.kernel.org/bpf/20210818012937.2522409-1-songliubraving@fb.com/

Song Liu (3):
  perf: enable branch record for software events
  bpf: introduce helper bpf_get_branch_snapshot
  selftests/bpf: add test for bpf_get_branch_snapshot

 arch/x86/events/intel/core.c                  |  29 ++++-
 arch/x86/events/intel/ds.c                    |   8 --
 arch/x86/events/perf_event.h                  |  10 +-
 include/linux/perf_event.h                    |  23 ++++
 include/uapi/linux/bpf.h                      |  22 ++++
 kernel/bpf/trampoline.c                       |   3 +-
 kernel/events/core.c                          |   2 +
 kernel/trace/bpf_trace.c                      |  30 ++++++
 tools/include/uapi/linux/bpf.h                |  22 ++++
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  19 +++-
 .../selftests/bpf/prog_tests/core_reloc.c     |  14 +--
 .../bpf/prog_tests/get_branch_snapshot.c      | 100 ++++++++++++++++++
 .../selftests/bpf/prog_tests/module_attach.c  |  39 -------
 .../selftests/bpf/progs/get_branch_snapshot.c |  40 +++++++
 tools/testing/selftests/bpf/test_progs.c      |  39 +++++++
 tools/testing/selftests/bpf/test_progs.h      |   2 +
 tools/testing/selftests/bpf/trace_helpers.c   |  37 +++++++
 tools/testing/selftests/bpf/trace_helpers.h   |   5 +
 18 files changed, 378 insertions(+), 66 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_branch_snapshot.c

--
2.30.2

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-07 20:27 [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
@ 2021-09-07 20:28 ` Song Liu
  2021-09-10 10:40   ` Peter Zijlstra
  2021-09-07 20:28 ` [PATCH v6 bpf-next 2/3] bpf: introduce helper bpf_get_branch_snapshot Song Liu
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Song Liu @ 2021-09-07 20:28 UTC (permalink / raw)
  To: bpf, linux-kernel
  Cc: acme, peterz, mingo, kjain, kernel-team, Song Liu,
	John Fastabend, Andrii Nakryiko

The typical way to access branch records (e.g. Intel LBR) is via hardware
perf_event. For CPUs with FREEZE_LBRS_ON_PMI support, the PMI can capture
reliable LBR data. On the other hand, LBR can also be useful in non-PMI
scenarios. For example, in a kretprobe or bpf fexit program, LBR can
provide a lot of information on what happened in the function. Add an API
to use branch records from software events.

Note that, when the software event triggers, it is necessary to stop the
branch record hardware as soon as possible. Therefore, static_call is used
to remove some branch instructions on this path.
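
Condensed, the plumbing below is a typed static call with a return-0
default (all names are taken from the diff); the PMU driver installs its
implementation at init time, and callers pay no indirect-branch cost where
static calls are inlined:

/* include/linux/perf_event.h: the hook is a typed static call */
typedef int (perf_snapshot_branch_stack_t)(struct perf_branch_entry *entries,
					   unsigned int cnt);
DECLARE_STATIC_CALL(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t);

/* kernel/events/core.c: default implementation returns 0 entries */
DEFINE_STATIC_CALL_RET0(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t);

/* arch/x86/events/intel/core.c: install the Intel callback at init */
static_call_update(perf_snapshot_branch_stack,
		   intel_pmu_snapshot_branch_stack);

/* call site (added by patch 2): no retpoline, no function pointer load */
cnt = static_call(perf_snapshot_branch_stack)(entries, cnt);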

Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Song Liu <songliubraving@fb.com>
---
 arch/x86/events/intel/core.c | 29 ++++++++++++++++++++++++++---
 arch/x86/events/intel/ds.c   |  8 --------
 arch/x86/events/perf_event.h | 10 ++++++++--
 include/linux/perf_event.h   | 23 +++++++++++++++++++++++
 kernel/events/core.c         |  2 ++
 5 files changed, 59 insertions(+), 13 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 7011e87be6d03..2e318fb8d649b 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2143,7 +2143,7 @@ static __initconst const u64 knl_hw_cache_extra_regs
  * However, there are some cases which may change PEBS status, e.g. PMI
  * throttle. The PEBS_ENABLE should be updated where the status changes.
  */
-static void __intel_pmu_disable_all(void)
+static __always_inline void __intel_pmu_disable_all(void)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 
@@ -2153,7 +2153,7 @@ static void __intel_pmu_disable_all(void)
 		intel_pmu_disable_bts();
 }
 
-static void intel_pmu_disable_all(void)
+static __always_inline void intel_pmu_disable_all(void)
 {
 	__intel_pmu_disable_all();
 	intel_pmu_pebs_disable_all();
@@ -2186,6 +2186,23 @@ static void intel_pmu_enable_all(int added)
 	__intel_pmu_enable_all(added, false);
 }
 
+static int
+intel_pmu_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	unsigned long flags;
+
+	local_irq_save(flags);
+	intel_pmu_disable_all();
+	intel_pmu_lbr_read();
+	cnt = min_t(unsigned int, cnt, x86_pmu.lbr_nr);
+
+	memcpy(entries, cpuc->lbr_entries, sizeof(struct perf_branch_entry) * cnt);
+	intel_pmu_enable_all(0);
+	local_irq_restore(flags);
+	return cnt;
+}
+
 /*
  * Workaround for:
  *   Intel Errata AAK100 (model 26)
@@ -6283,9 +6300,15 @@ __init int intel_pmu_init(void)
 			x86_pmu.lbr_nr = 0;
 	}
 
-	if (x86_pmu.lbr_nr)
+	if (x86_pmu.lbr_nr) {
 		pr_cont("%d-deep LBR, ", x86_pmu.lbr_nr);
 
+		/* only support branch_stack snapshot for perfmon >= v2 */
+		if (x86_pmu.disable_all == intel_pmu_disable_all)
+			static_call_update(perf_snapshot_branch_stack,
+					   intel_pmu_snapshot_branch_stack);
+	}
+
 	intel_pmu_check_extra_regs(x86_pmu.extra_regs);
 
 	/* Support full width counters using alternative MSR range */
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 8647713276a73..8a832986578a9 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1296,14 +1296,6 @@ void intel_pmu_pebs_enable_all(void)
 		wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
 }
 
-void intel_pmu_pebs_disable_all(void)
-{
-	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
-
-	if (cpuc->pebs_enabled)
-		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
-}
-
 static int intel_pmu_pebs_fixup_ip(struct pt_regs *regs)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index e3ac05c97b5e5..171abbb359fe5 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1240,6 +1240,14 @@ static inline bool intel_pmu_has_bts(struct perf_event *event)
 	return intel_pmu_has_bts_period(event, hwc->sample_period);
 }
 
+static __always_inline void intel_pmu_pebs_disable_all(void)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+	if (cpuc->pebs_enabled)
+		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
+}
+
 int intel_pmu_save_and_restart(struct perf_event *event);
 
 struct event_constraint *
@@ -1314,8 +1322,6 @@ void intel_pmu_pebs_disable(struct perf_event *event);
 
 void intel_pmu_pebs_enable_all(void);
 
-void intel_pmu_pebs_disable_all(void);
-
 void intel_pmu_pebs_sched_task(struct perf_event_context *ctx, bool sched_in);
 
 void intel_pmu_auto_reload_read(struct perf_event *event);
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fe156a8170aa3..0cbc5dfe11102 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -57,6 +57,7 @@ struct perf_guest_info_callbacks {
 #include <linux/cgroup.h>
 #include <linux/refcount.h>
 #include <linux/security.h>
+#include <linux/static_call.h>
 #include <asm/local.h>
 
 struct perf_callchain_entry {
@@ -1612,4 +1613,26 @@ extern void __weak arch_perf_update_userpage(struct perf_event *event,
 extern __weak u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr);
 #endif
 
+/*
+ * Snapshot branch stack on software events.
+ *
+ * Branch stack can be very useful in understanding software events. For
+ * example, when a long function, e.g. sys_perf_event_open, returns an
+ * errno, it is not obvious why the function failed. Branch stack could
+ * provide very helpful information in this type of scenarios.
+ *
+ * On software event, it is necessary to stop the hardware branch recorder
+ * fast. Otherwise, the hardware register/buffer will be flushed with
+ * entries of the triggering event. Therefore, static call is used to
+ * stop the hardware recorder.
+ */
+
+/*
+ * cnt is the number of entries allocated for the entries array.
+ * Return the number of entries copied to entries.
+ */
+typedef int (perf_snapshot_branch_stack_t)(struct perf_branch_entry *entries,
+					   unsigned int cnt);
+DECLARE_STATIC_CALL(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t);
+
 #endif /* _LINUX_PERF_EVENT_H */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 744e8726c5b2f..349f80aa9e7d8 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -13435,3 +13435,5 @@ struct cgroup_subsys perf_event_cgrp_subsys = {
 	.threaded	= true,
 };
 #endif /* CONFIG_CGROUP_PERF */
+
+DEFINE_STATIC_CALL_RET0(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t);
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v6 bpf-next 2/3] bpf: introduce helper bpf_get_branch_snapshot
  2021-09-07 20:27 [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
  2021-09-07 20:28 ` [PATCH v6 bpf-next 1/3] perf: enable branch record for software events Song Liu
@ 2021-09-07 20:28 ` Song Liu
  2021-09-07 20:28 ` [PATCH v6 bpf-next 3/3] selftests/bpf: add test for bpf_get_branch_snapshot Song Liu
  2021-09-07 20:29 ` [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
  3 siblings, 0 replies; 16+ messages in thread
From: Song Liu @ 2021-09-07 20:28 UTC (permalink / raw)
  To: bpf, linux-kernel
  Cc: acme, peterz, mingo, kjain, kernel-team, Song Liu,
	John Fastabend, Andrii Nakryiko

Introduce bpf_get_branch_snapshot(), which allows tracing programs to get
branch trace from hardware (e.g. Intel LBR). To use the feature, the user
needs to create a perf_event with proper branch_record filtering on each
cpu, and then call bpf_get_branch_snapshot in the bpf program. On Intel
CPUs, the VLBR event (raw event 0x1b00) can be used for this.
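
For reference, the per-CPU perf_event setup can look like the following
sketch, which mirrors what the selftest in patch 3/3 does (the raw config
0x1b00 selects the VLBR event and is Intel-specific):

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_vlbr_event(int cpu)
{
	struct perf_event_attr attr = {
		.size = sizeof(attr),
		.type = PERF_TYPE_RAW,
		.config = 0x1b00, /* VLBR event */
		.sample_type = PERF_SAMPLE_BRANCH_STACK,
		.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL |
				      PERF_SAMPLE_BRANCH_USER |
				      PERF_SAMPLE_BRANCH_ANY,
	};

	/* pid == -1: count on this cpu regardless of task */
	return syscall(__NR_perf_event_open, &attr, -1, cpu, -1,
		       PERF_FLAG_FD_CLOEXEC);
}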

Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Song Liu <songliubraving@fb.com>
---
 include/uapi/linux/bpf.h       | 22 ++++++++++++++++++++++
 kernel/bpf/trampoline.c        |  3 ++-
 kernel/trace/bpf_trace.c       | 30 ++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 22 ++++++++++++++++++++++
 4 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 791f31dd0abee..b695ef151001e 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4877,6 +4877,27 @@ union bpf_attr {
  *		Get the struct pt_regs associated with **task**.
  *	Return
  *		A pointer to struct pt_regs.
+ *
+ * long bpf_get_branch_snapshot(void *entries, u32 size, u64 flags)
+ *	Description
+ *		Get branch trace from hardware engines like Intel LBR. The
+ *		hardware engine is stopped shortly after the helper is
+ *		called. Therefore, the user needs to filter branch entries
+ *		based on the actual use case. To capture branch trace
+ *		before the trigger point of the BPF program, the helper
+ *		should be called at the beginning of the BPF program.
+ *
+ *		The data is stored as struct perf_branch_entry into output
+ *		buffer *entries*. *size* is the size of *entries* in bytes.
+ *		*flags* is reserved for now and must be zero.
+ *
+ *	Return
+ *		On success, number of bytes written to *entries*. On error, a
+ *		negative value.
+ *
+ *		**-EINVAL** if *flags* is not zero.
+ *
+ *		**-ENOENT** if architecture does not support branch records.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5055,6 +5076,7 @@ union bpf_attr {
 	FN(get_func_ip),		\
 	FN(get_attach_cookie),		\
 	FN(task_pt_regs),		\
+	FN(get_branch_snapshot),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index fe1e857324e66..39eaaff81953d 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -10,6 +10,7 @@
 #include <linux/rcupdate_trace.h>
 #include <linux/rcupdate_wait.h>
 #include <linux/module.h>
+#include <linux/static_call.h>
 
 /* dummy _ops. The verifier will operate on target program's ops. */
 const struct bpf_verifier_ops bpf_extension_verifier_ops = {
@@ -526,7 +527,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
 }
 
 #define NO_START_TIME 1
-static u64 notrace bpf_prog_start_time(void)
+static __always_inline u64 notrace bpf_prog_start_time(void)
 {
 	u64 start = NO_START_TIME;
 
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 8e2eb950aa829..067e88c3d2ee5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1017,6 +1017,34 @@ static const struct bpf_func_proto bpf_get_attach_cookie_proto_pe = {
 	.arg1_type	= ARG_PTR_TO_CTX,
 };
 
+BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
+{
+#ifndef CONFIG_X86
+	return -ENOENT;
+#else
+	static const u32 br_entry_size = sizeof(struct perf_branch_entry);
+	u32 entry_cnt = size / br_entry_size;
+
+	entry_cnt = static_call(perf_snapshot_branch_stack)(buf, entry_cnt);
+
+	if (unlikely(flags))
+		return -EINVAL;
+
+	if (!entry_cnt)
+		return -ENOENT;
+
+	return entry_cnt * br_entry_size;
+#endif
+}
+
+static const struct bpf_func_proto bpf_get_branch_snapshot_proto = {
+	.func		= bpf_get_branch_snapshot,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_UNINIT_MEM,
+	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
+};
+
 static const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1132,6 +1160,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_snprintf_proto;
 	case BPF_FUNC_get_func_ip:
 		return &bpf_get_func_ip_proto_tracing;
+	case BPF_FUNC_get_branch_snapshot:
+		return &bpf_get_branch_snapshot_proto;
 	default:
 		return bpf_base_func_proto(func_id);
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 791f31dd0abee..b695ef151001e 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4877,6 +4877,27 @@ union bpf_attr {
  *		Get the struct pt_regs associated with **task**.
  *	Return
  *		A pointer to struct pt_regs.
+ *
+ * long bpf_get_branch_snapshot(void *entries, u32 size, u64 flags)
+ *	Description
+ *		Get branch trace from hardware engines like Intel LBR. The
+ *		hardware engine is stopped shortly after the helper is
+ *		called. Therefore, the user needs to filter branch entries
+ *		based on the actual use case. To capture branch trace
+ *		before the trigger point of the BPF program, the helper
+ *		should be called at the beginning of the BPF program.
+ *
+ *		The data is stored as struct perf_branch_entry into output
+ *		buffer *entries*. *size* is the size of *entries* in bytes.
+ *		*flags* is reserved for now and must be zero.
+ *
+ *	Return
+ *		On success, number of bytes written to *entries*. On error, a
+ *		negative value.
+ *
+ *		**-EINVAL** if *flags* is not zero.
+ *
+ *		**-ENOENT** if architecture does not support branch records.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5055,6 +5076,7 @@ union bpf_attr {
 	FN(get_func_ip),		\
 	FN(get_attach_cookie),		\
 	FN(task_pt_regs),		\
+	FN(get_branch_snapshot),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v6 bpf-next 3/3] selftests/bpf: add test for bpf_get_branch_snapshot
  2021-09-07 20:27 [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
  2021-09-07 20:28 ` [PATCH v6 bpf-next 1/3] perf: enable branch record for software events Song Liu
  2021-09-07 20:28 ` [PATCH v6 bpf-next 2/3] bpf: introduce helper bpf_get_branch_snapshot Song Liu
@ 2021-09-07 20:28 ` Song Liu
  2021-09-07 20:29 ` [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
  3 siblings, 0 replies; 16+ messages in thread
From: Song Liu @ 2021-09-07 20:28 UTC (permalink / raw)
  To: bpf, linux-kernel
  Cc: acme, peterz, mingo, kjain, kernel-team, Song Liu,
	Andrii Nakryiko, John Fastabend

This test uses bpf_get_branch_snapshot from a fexit program. The test uses
a target function (bpf_testmod_loop_test) in bpf_testmod and compares the
recorded branches against kallsyms. If there are not enough records
matching kallsyms, the test fails.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
---
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  19 +++-
 .../selftests/bpf/prog_tests/core_reloc.c     |  14 +--
 .../bpf/prog_tests/get_branch_snapshot.c      | 100 ++++++++++++++++++
 .../selftests/bpf/prog_tests/module_attach.c  |  39 -------
 .../selftests/bpf/progs/get_branch_snapshot.c |  40 +++++++
 tools/testing/selftests/bpf/test_progs.c      |  39 +++++++
 tools/testing/selftests/bpf/test_progs.h      |   2 +
 tools/testing/selftests/bpf/trace_helpers.c   |  37 +++++++
 tools/testing/selftests/bpf/trace_helpers.h   |   5 +
 9 files changed, 243 insertions(+), 52 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_branch_snapshot.c

diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 141d8da687d21..50fc5561110a4 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -13,6 +13,18 @@
 
 DEFINE_PER_CPU(int, bpf_testmod_ksym_percpu) = 123;
 
+noinline int bpf_testmod_loop_test(int n)
+{
+	int i, sum = 0;
+
+	/* the primary goal of this test is to test LBR. Create a lot of
+	 * branches in the function, so we can catch it easily.
+	 */
+	for (i = 0; i < n; i++)
+		sum += i;
+	return sum;
+}
+
 noinline ssize_t
 bpf_testmod_test_read(struct file *file, struct kobject *kobj,
 		      struct bin_attribute *bin_attr,
@@ -24,7 +36,11 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
 		.len = len,
 	};
 
-	trace_bpf_testmod_test_read(current, &ctx);
+	/* This is always true. Use the check to make sure the compiler
+	 * doesn't remove bpf_testmod_loop_test.
+	 */
+	if (bpf_testmod_loop_test(101) > 100)
+		trace_bpf_testmod_test_read(current, &ctx);
 
 	return -EIO; /* always fail */
 }
@@ -71,4 +87,3 @@ module_exit(bpf_testmod_exit);
 MODULE_AUTHOR("Andrii Nakryiko");
 MODULE_DESCRIPTION("BPF selftests module");
 MODULE_LICENSE("Dual BSD/GPL");
-
diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index 4739b15b2a979..15d355af8d1d2 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -30,7 +30,7 @@ static int duration = 0;
 	.output_len = sizeof(struct core_reloc_module_output),		\
 	.prog_sec_name = sec_name,					\
 	.raw_tp_name = tp_name,						\
-	.trigger = trigger_module_test_read,				\
+	.trigger = __trigger_module_test_read,				\
 	.needs_testmod = true,						\
 }
 
@@ -475,19 +475,11 @@ static int setup_type_id_case_failure(struct core_reloc_test_case *test)
 	return 0;
 }
 
-static int trigger_module_test_read(const struct core_reloc_test_case *test)
+static int __trigger_module_test_read(const struct core_reloc_test_case *test)
 {
 	struct core_reloc_module_output *exp = (void *)test->output;
-	int fd, err;
-
-	fd = open("/sys/kernel/bpf_testmod", O_RDONLY);
-	err = -errno;
-	if (CHECK(fd < 0, "testmod_file_open", "failed: %d\n", err))
-		return err;
-
-	read(fd, NULL, exp->len); /* request expected number of bytes */
-	close(fd);
 
+	trigger_module_test_read(exp->len);
 	return 0;
 }
 
diff --git a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
new file mode 100644
index 0000000000000..26af9b3d572e3
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2021 Facebook */
+#include <test_progs.h>
+#include "get_branch_snapshot.skel.h"
+
+static int *pfd_array;
+static int cpu_cnt;
+
+static int create_perf_events(void)
+{
+	struct perf_event_attr attr = {0};
+	int cpu;
+
+	/* create perf event */
+	attr.size = sizeof(attr);
+	attr.type = PERF_TYPE_RAW;
+	attr.config = 0x1b00;
+	attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
+	attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL |
+		PERF_SAMPLE_BRANCH_USER | PERF_SAMPLE_BRANCH_ANY;
+
+	cpu_cnt = libbpf_num_possible_cpus();
+	pfd_array = malloc(sizeof(int) * cpu_cnt);
+	if (!pfd_array) {
+		cpu_cnt = 0;
+		return 1;
+	}
+
+	for (cpu = 0; cpu < cpu_cnt; cpu++) {
+		pfd_array[cpu] = syscall(__NR_perf_event_open, &attr,
+					 -1, cpu, -1, PERF_FLAG_FD_CLOEXEC);
+		if (pfd_array[cpu] < 0)
+			break;
+	}
+
+	return cpu == 0;
+}
+
+static void close_perf_events(void)
+{
+	int cpu = 0;
+	int fd;
+
+	while (cpu++ < cpu_cnt) {
+	while (cpu < cpu_cnt) {
+		fd = pfd_array[cpu++];
+			break;
+		close(fd);
+	}
+	free(pfd_array);
+}
+
+void test_get_branch_snapshot(void)
+{
+	struct get_branch_snapshot *skel = NULL;
+	int err;
+
+	if (create_perf_events()) {
+		test__skip();  /* system doesn't support LBR */
+		goto cleanup;
+	}
+
+	skel = get_branch_snapshot__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "get_branch_snapshot__open_and_load"))
+		goto cleanup;
+
+	err = kallsyms_find("bpf_testmod_loop_test", &skel->bss->address_low);
+	if (!ASSERT_OK(err, "kallsyms_find"))
+		goto cleanup;
+
+	err = kallsyms_find_next("bpf_testmod_loop_test", &skel->bss->address_high);
+	if (!ASSERT_OK(err, "kallsyms_find_next"))
+		goto cleanup;
+
+	err = get_branch_snapshot__attach(skel);
+	if (!ASSERT_OK(err, "get_branch_snapshot__attach"))
+		goto cleanup;
+
+	trigger_module_test_read(100);
+
+	if (skel->bss->total_entries < 16) {
+		/* too few entries for the hit/waste test */
+		test__skip();
+		goto cleanup;
+	}
+
+	ASSERT_GT(skel->bss->test1_hits, 1, "find_looptest_in_lbr");
+
+	/* Given we stop LBR in software, we will waste a few entries.
+	 * But we should try to waste as few entries as possible. We are at
+	 * about 11 on x86_64 systems.
+	 * Add a check for < 15 so that we get heads-up when something
+	 * changes and wastes too many entries.
+	 */
+	ASSERT_LT(skel->bss->wasted_entries, 15, "check_wasted_entries");
+
+cleanup:
+	get_branch_snapshot__destroy(skel);
+	close_perf_events();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/module_attach.c b/tools/testing/selftests/bpf/prog_tests/module_attach.c
index d85a69b7ce449..1797a6e4d6d84 100644
--- a/tools/testing/selftests/bpf/prog_tests/module_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/module_attach.c
@@ -6,45 +6,6 @@
 
 static int duration;
 
-static int trigger_module_test_read(int read_sz)
-{
-	int fd, err;
-
-	fd = open("/sys/kernel/bpf_testmod", O_RDONLY);
-	err = -errno;
-	if (CHECK(fd < 0, "testmod_file_open", "failed: %d\n", err))
-		return err;
-
-	read(fd, NULL, read_sz);
-	close(fd);
-
-	return 0;
-}
-
-static int trigger_module_test_write(int write_sz)
-{
-	int fd, err;
-	char *buf = malloc(write_sz);
-
-	if (!buf)
-		return -ENOMEM;
-
-	memset(buf, 'a', write_sz);
-	buf[write_sz-1] = '\0';
-
-	fd = open("/sys/kernel/bpf_testmod", O_WRONLY);
-	err = -errno;
-	if (CHECK(fd < 0, "testmod_file_open", "failed: %d\n", err)) {
-		free(buf);
-		return err;
-	}
-
-	write(fd, buf, write_sz);
-	close(fd);
-	free(buf);
-	return 0;
-}
-
 static int delete_module(const char *name, int flags)
 {
 	return syscall(__NR_delete_module, name, flags);
diff --git a/tools/testing/selftests/bpf/progs/get_branch_snapshot.c b/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
new file mode 100644
index 0000000000000..a1b139888048c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2021 Facebook */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test1_hits = 0;
+__u64 address_low = 0;
+__u64 address_high = 0;
+int wasted_entries = 0;
+long total_entries = 0;
+
+#define ENTRY_CNT 32
+struct perf_branch_entry entries[ENTRY_CNT] = {};
+
+static inline bool in_range(__u64 val)
+{
+	return (val >= address_low) && (val < address_high);
+}
+
+SEC("fexit/bpf_testmod_loop_test")
+int BPF_PROG(test1, int n, int ret)
+{
+	long i;
+
+	total_entries = bpf_get_branch_snapshot(entries, sizeof(entries), 0);
+	total_entries /= sizeof(struct perf_branch_entry);
+
+	for (i = 0; i < ENTRY_CNT; i++) {
+		if (i >= total_entries)
+			break;
+		if (in_range(entries[i].from) && in_range(entries[i].to))
+			test1_hits++;
+		else if (!test1_hits)
+			wasted_entries++;
+	}
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index cc1cd240445d2..2ed01f615d20f 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -743,6 +743,45 @@ int cd_flavor_subdir(const char *exec_name)
 	return chdir(flavor);
 }
 
+int trigger_module_test_read(int read_sz)
+{
+	int fd, err;
+
+	fd = open("/sys/kernel/bpf_testmod", O_RDONLY);
+	err = -errno;
+	if (!ASSERT_GE(fd, 0, "testmod_file_open"))
+		return err;
+
+	read(fd, NULL, read_sz);
+	close(fd);
+
+	return 0;
+}
+
+int trigger_module_test_write(int write_sz)
+{
+	int fd, err;
+	char *buf = malloc(write_sz);
+
+	if (!buf)
+		return -ENOMEM;
+
+	memset(buf, 'a', write_sz);
+	buf[write_sz-1] = '\0';
+
+	fd = open("/sys/kernel/bpf_testmod", O_WRONLY);
+	err = -errno;
+	if (!ASSERT_GE(fd, 0, "testmod_file_open")) {
+		free(buf);
+		return err;
+	}
+
+	write(fd, buf, write_sz);
+	close(fd);
+	free(buf);
+	return 0;
+}
+
 #define MAX_BACKTRACE_SZ 128
 void crash_handler(int signum)
 {
diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
index c8c2bf878f67c..94bef0aa74cf5 100644
--- a/tools/testing/selftests/bpf/test_progs.h
+++ b/tools/testing/selftests/bpf/test_progs.h
@@ -291,6 +291,8 @@ int compare_map_keys(int map1_fd, int map2_fd);
 int compare_stack_ips(int smap_fd, int amap_fd, int stack_trace_len);
 int extract_build_id(char *build_id, size_t size);
 int kern_sync_rcu(void);
+int trigger_module_test_read(int read_sz);
+int trigger_module_test_write(int write_sz);
 
 #ifdef __x86_64__
 #define SYS_NANOSLEEP_KPROBE_NAME "__x64_sys_nanosleep"
diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
index e7a19b04d4eaf..5100a169b72b1 100644
--- a/tools/testing/selftests/bpf/trace_helpers.c
+++ b/tools/testing/selftests/bpf/trace_helpers.c
@@ -1,4 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
+#include <ctype.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
@@ -117,6 +118,42 @@ int kallsyms_find(const char *sym, unsigned long long *addr)
 	return err;
 }
 
+/* find the address of the next symbol of the same type, this can be used
+ * to determine the end of a function.
+ */
+int kallsyms_find_next(const char *sym, unsigned long long *addr)
+{
+	char type, found_type, name[500];
+	unsigned long long value;
+	bool found = false;
+	int err = 0;
+	FILE *f;
+
+	f = fopen("/proc/kallsyms", "r");
+	if (!f)
+		return -EINVAL;
+
+	while (fscanf(f, "%llx %c %499s%*[^\n]\n", &value, &type, name) > 0) {
+		/* Different types of symbols in kernel modules are mixed
+		 * in /proc/kallsyms. Only return the next matching type.
+		 * Use tolower() for type so that 'T' matches 't'.
+		 */
+		if (found && found_type == tolower(type)) {
+			*addr = value;
+			goto out;
+		}
+		if (strcmp(name, sym) == 0) {
+			found = true;
+			found_type = tolower(type);
+		}
+	}
+	err = -ENOENT;
+
+out:
+	fclose(f);
+	return err;
+}
+
 void read_trace_pipe(void)
 {
 	int trace_fd;
diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h
index d907b445524d5..bc8ed86105d94 100644
--- a/tools/testing/selftests/bpf/trace_helpers.h
+++ b/tools/testing/selftests/bpf/trace_helpers.h
@@ -16,6 +16,11 @@ long ksym_get_addr(const char *name);
 /* open kallsyms and find addresses on the fly, faster than load + search. */
 int kallsyms_find(const char *sym, unsigned long long *addr);
 
+/* find the address of the next symbol, this can be used to determine the
+ * end of a function
+ */
+int kallsyms_find_next(const char *sym, unsigned long long *addr);
+
 void read_trace_pipe(void);
 
 ssize_t get_uprobe_offset(const void *addr, ssize_t base);
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot
  2021-09-07 20:27 [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
                   ` (2 preceding siblings ...)
  2021-09-07 20:28 ` [PATCH v6 bpf-next 3/3] selftests/bpf: add test for bpf_get_branch_snapshot Song Liu
@ 2021-09-07 20:29 ` Song Liu
  2021-09-07 20:58   ` Andrii Nakryiko
  2021-09-09 21:53   ` Song Liu
  3 siblings, 2 replies; 16+ messages in thread
From: Song Liu @ 2021-09-07 20:29 UTC (permalink / raw)
  To: bpf, open list
  Cc: Arnaldo Carvalho de Melo, Peter Zijlstra, Ingo Molnar,
	Kajol Jain, Kernel Team



> On Sep 7, 2021, at 1:27 PM, Song Liu <songliubraving@fb.com> wrote:

Forgot to add changes:

Changes v5 => v6:
1. Add local_irq_save/restore to intel_pmu_snapshot_branch_stack. 
   (Peter)
2. Remove buf and size check in bpf_get_branch_snapshot, move flags 
   check to later in the function. (Peter, Andrii)
3. Revise comments for bpf_get_branch_snapshot in bpf.h (Andrii)

> [...]


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot
  2021-09-07 20:29 ` [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
@ 2021-09-07 20:58   ` Andrii Nakryiko
  2021-09-09 21:53   ` Song Liu
  1 sibling, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2021-09-07 20:58 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, open list, Arnaldo Carvalho de Melo, Peter Zijlstra,
	Ingo Molnar, Kajol Jain, Kernel Team

On Tue, Sep 7, 2021 at 1:31 PM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Sep 7, 2021, at 1:27 PM, Song Liu <songliubraving@fb.com> wrote:
>
> Forgot to add changes:
>
> Changes v5 => v6
> 1. Add local_irq_save/restore to intel_pmu_snapshot_branch_stack.
>    (Peter)
> 2. Remove buf and size check in bpf_get_branch_snapshot, move flags
>    check to later fo the function. (Peter, Andrii)
> 3. Revise comments for bpf_get_branch_snapshot in bpf.h (Andrii)
>

Looks great, thanks! Looking forward to being able to use it. Please
consider following up with migrate_disable() inlining as well.

For the series:

Acked-by: Andrii Nakryiko <andrii@kernel.org>

> > [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot
  2021-09-07 20:29 ` [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
  2021-09-07 20:58   ` Andrii Nakryiko
@ 2021-09-09 21:53   ` Song Liu
  1 sibling, 0 replies; 16+ messages in thread
From: Song Liu @ 2021-09-09 21:53 UTC (permalink / raw)
  To: bpf, open list, Peter Zijlstra
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Kajol Jain, Kernel Team

Hi Peter, 

Do you have further comments/concerns on v6? If not, could you please 
reply with your Reviewed-by or Acked-by?

Thanks,
Song

> On Sep 7, 2021, at 1:29 PM, Song Liu <songliubraving@fb.com> wrote:
> [...]


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-07 20:28 ` [PATCH v6 bpf-next 1/3] perf: enable branch record for software events Song Liu
@ 2021-09-10 10:40   ` Peter Zijlstra
  2021-09-10 13:54     ` Peter Zijlstra
  0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2021-09-10 10:40 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, linux-kernel, acme, mingo, kjain, kernel-team,
	John Fastabend, Andrii Nakryiko

On Tue, Sep 07, 2021 at 01:28:00PM -0700, Song Liu wrote:

> +static int
> +intel_pmu_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	unsigned long flags;
> +
> +	local_irq_save(flags);
> +	intel_pmu_disable_all();
> +	intel_pmu_lbr_read();
> +	cnt = min_t(unsigned int, cnt, x86_pmu.lbr_nr);
> +
> +	memcpy(entries, cpuc->lbr_entries, sizeof(struct perf_branch_entry) * cnt);
> +	intel_pmu_enable_all(0);
> +	local_irq_restore(flags);
> +	return cnt;
> +}

So elsewhere you state:

+       /* Given we stop LBR in software, we will waste a few entries.
+        * But we should try to waste as few as possible entries. We are at
+        * about 11 on x86_64 systems.
+        * Add a check for < 15 so that we get heads-up when something
+        * changes and wastes too many entries.
+        */
+       ASSERT_LT(skel->bss->wasted_entries, 15, "check_wasted_entries");

Which is atrocious.. so I disassembled the new function to get horrible
crap. The below seems to cure that.

---
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2143,19 +2143,19 @@ static __initconst const u64 knl_hw_cach
  * However, there are some cases which may change PEBS status, e.g. PMI
  * throttle. The PEBS_ENABLE should be updated where the status changes.
  */
-static __always_inline void __intel_pmu_disable_all(void)
+static __always_inline void __intel_pmu_disable_all(bool bts)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 
 	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-	if (test_bit(INTEL_PMC_IDX_FIXED_BTS, cpuc->active_mask))
+	if (bts && test_bit(INTEL_PMC_IDX_FIXED_BTS, cpuc->active_mask))
 		intel_pmu_disable_bts();
 }
 
 static __always_inline void intel_pmu_disable_all(void)
 {
-	__intel_pmu_disable_all();
+	__intel_pmu_disable_all(true);
 	intel_pmu_pebs_disable_all();
 	intel_pmu_lbr_disable_all();
 }
@@ -2186,14 +2186,12 @@ static void intel_pmu_enable_all(int add
 	__intel_pmu_enable_all(added, false);
 }
 
-static int
-intel_pmu_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+static noinline int
+__intel_pmu_snapshot_branch_stack(struct perf_branch_entry *entries,
+				  unsigned int cnt, unsigned long flags)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
-	unsigned long flags;
 
-	local_irq_save(flags);
-	intel_pmu_disable_all();
 	intel_pmu_lbr_read();
 	cnt = min_t(unsigned int, cnt, x86_pmu.lbr_nr);
 
@@ -2203,6 +2201,36 @@ intel_pmu_snapshot_branch_stack(struct p
 	return cnt;
 }
 
+static int
+intel_pmu_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+{
+	unsigned long flags;
+
+	/* must not have branches... */
+	local_irq_save(flags);
+	__intel_pmu_disable_all(false); /* we don't care about BTS */
+	__intel_pmu_pebs_disable_all();
+	__intel_pmu_lbr_disable();
+	/*            ... until here */
+
+	return __intel_pmu_snapshot_branch_stack(entries, cnt, flags);
+}
+
+static int
+intel_pmu_snapshot_arch_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+{
+	unsigned long flags;
+
+	/* must not have branches... */
+	local_irq_save(flags);
+	__intel_pmu_disable_all(false); /* we don't care about BTS */
+	__intel_pmu_pebs_disable_all();
+	__intel_pmu_arch_lbr_disable();
+	/*            ... until here */
+
+	return __intel_pmu_snapshot_branch_stack(entries, cnt, flags);
+}
+
 /*
  * Workaround for:
  *   Intel Errata AAK100 (model 26)
@@ -2946,7 +2974,7 @@ static int intel_pmu_handle_irq(struct p
 		apic_write(APIC_LVTPC, APIC_DM_NMI);
 	intel_bts_disable_local();
 	cpuc->enabled = 0;
-	__intel_pmu_disable_all();
+	__intel_pmu_disable_all(true);
 	handled = intel_pmu_drain_bts_buffer();
 	handled += intel_bts_interrupt();
 	status = intel_pmu_get_status();
@@ -6304,9 +6332,15 @@ __init int intel_pmu_init(void)
 		pr_cont("%d-deep LBR, ", x86_pmu.lbr_nr);
 
 		/* only support branch_stack snapshot for perfmon >= v2 */
-		if (x86_pmu.disable_all == intel_pmu_disable_all)
-			static_call_update(perf_snapshot_branch_stack,
-					   intel_pmu_snapshot_branch_stack);
+		if (x86_pmu.disable_all == intel_pmu_disable_all) {
+			if (boot_cpu_has(X86_FEATURE_ARCH_LBR)) {
+				static_call_update(perf_snapshot_branch_stack,
+						   intel_pmu_snapshot_arch_branch_stack);
+			} else {
+				static_call_update(perf_snapshot_branch_stack,
+						   intel_pmu_snapshot_branch_stack);
+			}
+		}
 	}
 
 	intel_pmu_check_extra_regs(x86_pmu.extra_regs);
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1296,6 +1296,14 @@ void intel_pmu_pebs_enable_all(void)
 		wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
 }
 
+void intel_pmu_pebs_disable_all(void)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+	if (cpuc->pebs_enabled)
+		__intel_pmu_pebs_disable_all();
+}
+
 static int intel_pmu_pebs_fixup_ip(struct pt_regs *regs)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1240,12 +1240,23 @@ static inline bool intel_pmu_has_bts(str
 	return intel_pmu_has_bts_period(event, hwc->sample_period);
 }
 
-static __always_inline void intel_pmu_pebs_disable_all(void)
+static __always_inline void __intel_pmu_pebs_disable_all(void)
 {
-	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
+}
 
-	if (cpuc->pebs_enabled)
-		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
+static __always_inline void __intel_pmu_arch_lbr_disable(void)
+{
+	wrmsrl(MSR_ARCH_LBR_CTL, 0);
+}
+
+static __always_inline void __intel_pmu_lbr_disable(void)
+{
+	u64 debugctl;
+
+	rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
+	debugctl &= ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
 }
 
 int intel_pmu_save_and_restart(struct perf_event *event);
@@ -1322,6 +1333,8 @@ void intel_pmu_pebs_disable(struct perf_
 
 void intel_pmu_pebs_enable_all(void);
 
+void intel_pmu_pebs_disable_all(void);
+
 void intel_pmu_pebs_sched_task(struct perf_event_context *ctx, bool sched_in);
 
 void intel_pmu_auto_reload_read(struct perf_event *event);
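
For readers jumping in mid-thread: the perf_snapshot_branch_stack static call
that intel_pmu_init() updates above lives on the generic perf side and is what
the BPF helper ends up invoking. A rough sketch of that wiring, simplified
from the series (exact macro variants and the helper's flags handling may
differ):

	/* include/linux/perf_event.h (sketch) */
	typedef int (perf_snapshot_branch_stack_t)(struct perf_branch_entry *entries,
						   unsigned int cnt);
	DECLARE_STATIC_CALL(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t);

	/* BPF helper side (sketch): ask whatever the PMU registered */
	entry_cnt = static_call(perf_snapshot_branch_stack)(buf, entry_cnt);

static_call_update() then patches the call site to the arch/legacy LBR variant
chosen above, so the helper pays a direct call rather than an indirect one.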


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-10 10:40   ` Peter Zijlstra
@ 2021-09-10 13:54     ` Peter Zijlstra
  2021-09-10 18:27       ` Song Liu
  0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2021-09-10 13:54 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, linux-kernel, acme, mingo, kjain, kernel-team,
	John Fastabend, Andrii Nakryiko

On Fri, Sep 10, 2021 at 12:40:51PM +0200, Peter Zijlstra wrote:

> The below seems to cure that.

Seems I lost a hunk, fold below.

diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index 9e6d6eaeb4cb..6b72e9b55c69 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -228,20 +228,6 @@ static void __intel_pmu_lbr_enable(bool pmi)
 		wrmsrl(MSR_ARCH_LBR_CTL, lbr_select | ARCH_LBR_CTL_LBREN);
 }
 
-static void __intel_pmu_lbr_disable(void)
-{
-	u64 debugctl;
-
-	if (static_cpu_has(X86_FEATURE_ARCH_LBR)) {
-		wrmsrl(MSR_ARCH_LBR_CTL, 0);
-		return;
-	}
-
-	rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
-	debugctl &= ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
-	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
-}
-
 void intel_pmu_lbr_reset_32(void)
 {
 	int i;
@@ -779,8 +765,12 @@ void intel_pmu_lbr_disable_all(void)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 
-	if (cpuc->lbr_users && !vlbr_exclude_host())
+	if (cpuc->lbr_users && !vlbr_exclude_host()) {
+		if (static_cpu_has(X86_FEATURE_ARCH_LBR))
+			return __intel_pmu_arch_lbr_disable();
+
 		__intel_pmu_lbr_disable();
+	}
 }
 
 void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
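
With this hunk folded in, intel_pmu_lbr_disable_all() should end up reading as
follows (reconstructed from the two hunks above, assuming the
__intel_pmu_{arch_,}lbr_disable() helpers moved to perf_event.h as in the
earlier hunk):

	void intel_pmu_lbr_disable_all(void)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

		if (cpuc->lbr_users && !vlbr_exclude_host()) {
			if (static_cpu_has(X86_FEATURE_ARCH_LBR))
				return __intel_pmu_arch_lbr_disable();

			__intel_pmu_lbr_disable();
		}
	}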

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-10 13:54     ` Peter Zijlstra
@ 2021-09-10 18:27       ` Song Liu
  2021-09-10 18:40         ` Peter Zijlstra
  0 siblings, 1 reply; 16+ messages in thread
From: Song Liu @ 2021-09-10 18:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: bpf, linux-kernel, acme, mingo, kjain, Kernel Team,
	John Fastabend, Andrii Nakryiko



> On Sep 10, 2021, at 6:54 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> 
> On Fri, Sep 10, 2021 at 12:40:51PM +0200, Peter Zijlstra wrote:
> 
>> The below seems to cure that.
> 
> Seems I lost a hunk, fold below.
> 
> diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
> index 9e6d6eaeb4cb..6b72e9b55c69 100644
> --- a/arch/x86/events/intel/lbr.c
> +++ b/arch/x86/events/intel/lbr.c
> @@ -228,20 +228,6 @@ static void __intel_pmu_lbr_enable(bool pmi)
> 		wrmsrl(MSR_ARCH_LBR_CTL, lbr_select | ARCH_LBR_CTL_LBREN);
> }
> 
> -static void __intel_pmu_lbr_disable(void)
> -{
> -	u64 debugctl;
> -
> -	if (static_cpu_has(X86_FEATURE_ARCH_LBR)) {
> -		wrmsrl(MSR_ARCH_LBR_CTL, 0);
> -		return;
> -	}
> -
> -	rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
> -	debugctl &= ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
> -	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
> -}
> -
> void intel_pmu_lbr_reset_32(void)
> {
> 	int i;
> @@ -779,8 +765,12 @@ void intel_pmu_lbr_disable_all(void)
> {
> 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> 
> -	if (cpuc->lbr_users && !vlbr_exclude_host())
> +	if (cpuc->lbr_users && !vlbr_exclude_host()) {
> +		if (static_cpu_has(X86_FEATURE_ARCH_LBR))
> +			return __intel_pmu_arch_lbr_disable();
> +
> 		__intel_pmu_lbr_disable();
> +	}
> }
> 
> void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)

This works great and saves 3 entries! We have the following now:

ID: 0 from bpf_get_branch_snapshot+18 to intel_pmu_snapshot_branch_stack+0
ID: 1 from __brk_limit+477143934 to bpf_get_branch_snapshot+0
ID: 2 from __brk_limit+477192263 to __brk_limit+477143880  # trampoline 
ID: 3 from __bpf_prog_enter+34 to __brk_limit+477192251
ID: 4 from migrate_disable+60 to __bpf_prog_enter+9
ID: 5 from __bpf_prog_enter+4 to migrate_disable+0
ID: 6 from bpf_testmod_loop_test+20 to __bpf_prog_enter+0
ID: 7 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
ID: 8 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13

I will fold this in and send v7. 

Thanks,
Song


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-10 18:27       ` Song Liu
@ 2021-09-10 18:40         ` Peter Zijlstra
  2021-09-10 18:50           ` Peter Zijlstra
  2021-09-10 18:51           ` Song Liu
  0 siblings, 2 replies; 16+ messages in thread
From: Peter Zijlstra @ 2021-09-10 18:40 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, linux-kernel, acme, mingo, kjain, Kernel Team,
	John Fastabend, Andrii Nakryiko

On Fri, Sep 10, 2021 at 06:27:36PM +0000, Song Liu wrote:

> This works great and saves 3 entries! We have the following now:

Yay!

> ID: 0 from bpf_get_branch_snapshot+18 to intel_pmu_snapshot_branch_stack+0

is unavoidable, we need to end up in intel_pmu_snapshot_branch_stack()
eventually.

> ID: 1 from __brk_limit+477143934 to bpf_get_branch_snapshot+0

could be elided by having the JIT emit the call to
intel_pmu_snapshot_branch_stack directly, instead of laundering it
through that helper I suppose.
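
(To spell out the call chains, hypothetically:

	today:        bpf_prog -> bpf_get_branch_snapshot() -> static_call target
	JIT-inlined:  bpf_prog -> intel_pmu_snapshot_branch_stack()

i.e. the JIT would drop one call/return pair, and branch entry 1 with it.)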

> ID: 2 from __brk_limit+477192263 to __brk_limit+477143880  # trampoline 
> ID: 3 from __bpf_prog_enter+34 to __brk_limit+477192251

-ENOCLUE

> ID: 4 from migrate_disable+60 to __bpf_prog_enter+9
> ID: 5 from __bpf_prog_enter+4 to migrate_disable+0

I suppose we can reduce that to a single branch if we inline
migrate_disable() here; that thing unfortunately needs one branch
itself.

> ID: 6 from bpf_testmod_loop_test+20 to __bpf_prog_enter+0

And this is the first branch out of the test program, giving 7 entries
now, of which we can remove at least 2 more with a bit of elbow grease,
right?

> ID: 7 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
> ID: 8 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
> 
> I will fold this in and send v7. 

Excellent.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-10 18:40         ` Peter Zijlstra
@ 2021-09-10 18:50           ` Peter Zijlstra
  2021-09-10 19:00             ` Song Liu
  2021-09-10 18:51           ` Song Liu
  1 sibling, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2021-09-10 18:50 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, linux-kernel, acme, mingo, kjain, Kernel Team,
	John Fastabend, Andrii Nakryiko

On Fri, Sep 10, 2021 at 08:40:27PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 10, 2021 at 06:27:36PM +0000, Song Liu wrote:
> 
> > This works great and saves 3 entries! We have the following now:
> 
> Yay!
> 
> > ID: 0 from bpf_get_branch_snapshot+18 to intel_pmu_snapshot_branch_stack+0
> 
> is unavoidable, we need to end up in intel_pmu_snapshot_branch_stack()
> eventually.
> 
> > ID: 1 from __brk_limit+477143934 to bpf_get_branch_snapshot+0
> 
> could be elided by having the JIT emit the call to
> intel_pmu_snapshot_branch_stack directly, instead of laundering it
> through that helper I suppose.
> 
> > ID: 2 from __brk_limit+477192263 to __brk_limit+477143880  # trampoline 
> > ID: 3 from __bpf_prog_enter+34 to __brk_limit+477192251
> 
> -ENOCLUE
> 
> > ID: 4 from migrate_disable+60 to __bpf_prog_enter+9
> > ID: 5 from __bpf_prog_enter+4 to migrate_disable+0
> 
> I suppose we can reduce that to a single branch if we inline
> migrate_disable() here; that thing unfortunately needs one branch
> itself.

Oooh, since we put local_irq_save/restore() in
intel_pmu_snapshot_branch_stack(), we no longer need to be after
migrate_disable(). You could go back to placing it earlier!

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-10 18:40         ` Peter Zijlstra
  2021-09-10 18:50           ` Peter Zijlstra
@ 2021-09-10 18:51           ` Song Liu
  1 sibling, 0 replies; 16+ messages in thread
From: Song Liu @ 2021-09-10 18:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: bpf, linux-kernel, acme, mingo, kjain, Kernel Team,
	John Fastabend, Andrii Nakryiko



> On Sep 10, 2021, at 11:40 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> 
> On Fri, Sep 10, 2021 at 06:27:36PM +0000, Song Liu wrote:
> 
>> This works great and saves 3 entries! We have the following now:
> 
> Yay!
> 
>> ID: 0 from bpf_get_branch_snapshot+18 to intel_pmu_snapshot_branch_stack+0
> 
> is unavoidable, we need to end up in intel_pmu_snapshot_branch_stack()
> eventually.
> 
>> ID: 1 from __brk_limit+477143934 to bpf_get_branch_snapshot+0
> 
> could be elided by having the JIT emit the call to
> intel_pmu_snapshot_branch_stack directly, instead of laundering it
> through that helper I suppose.

Yep, some JIT magic could save one entry here. 

> 
>> ID: 2 from __brk_limit+477192263 to __brk_limit+477143880  # trampoline 
>> ID: 3 from __bpf_prog_enter+34 to __brk_limit+477192251
> 
> -ENOCLUE
> 
>> ID: 4 from migrate_disable+60 to __bpf_prog_enter+9
>> ID: 5 from __bpf_prog_enter+4 to migrate_disable+0
> 
> I suppose we can reduce that to a single branch if we inline
> migrate_disable() here; that thing unfortunately needs one branch
> itself.

To inline migrate_disable(), we may need to expose this_rq() in include/, or
find some other alternative. I am planning to optimize that after this
set gets in.
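
For context, migrate_disable() is roughly the following at this point
(paraphrased from kernel/sched/core.c; exact details may differ), which is
why inlining it would need this_rq() -- and hence struct rq -- visible to
callers:

	void migrate_disable(void)
	{
		struct task_struct *p = current;

		if (p->migration_disabled) {
			p->migration_disabled++;
			return;
		}

		preempt_disable();
		this_rq()->nr_pinned++;	/* needs kernel/sched-private struct rq */
		p->migration_disabled = 1;
		preempt_enable();
	}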

Thanks,
Song

> 
>> ID: 6 from bpf_testmod_loop_test+20 to __bpf_prog_enter+0
> 
> And this is the first branch out of the test program, giving 7 entries
> now, of which we can remove at least 2 more with a bit of elbow grease,
> right?
> 
>> ID: 7 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
>> ID: 8 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
>> 
>> I will fold this in and send v7. 
> 
> Excellent.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-10 18:50           ` Peter Zijlstra
@ 2021-09-10 19:00             ` Song Liu
  2021-09-10 19:08               ` Peter Zijlstra
  0 siblings, 1 reply; 16+ messages in thread
From: Song Liu @ 2021-09-10 19:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: bpf, linux-kernel, acme, mingo, kjain, Kernel Team,
	John Fastabend, Andrii Nakryiko



> On Sep 10, 2021, at 11:50 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> 
> On Fri, Sep 10, 2021 at 08:40:27PM +0200, Peter Zijlstra wrote:
>> On Fri, Sep 10, 2021 at 06:27:36PM +0000, Song Liu wrote:
>> 
>>> This works great and saves 3 entries! We have the following now:
>> 
>> Yay!
>> 
>>> ID: 0 from bpf_get_branch_snapshot+18 to intel_pmu_snapshot_branch_stack+0
>> 
>> is unavoidable, we need to end up in intel_pmu_snapshot_branch_stack()
>> eventually.
>> 
>>> ID: 1 from __brk_limit+477143934 to bpf_get_branch_snapshot+0
>> 
>> could be elided by having the JIT emit the call to
>> intel_pmu_snapshot_branch_stack directly, instead of laundering it
>> through that helper I suppose.
>> 
>>> ID: 2 from __brk_limit+477192263 to __brk_limit+477143880  # trampoline 
>>> ID: 3 from __bpf_prog_enter+34 to __brk_limit+477192251
>> 
>> -ENOCLUE
>> 
>>> ID: 4 from migrate_disable+60 to __bpf_prog_enter+9
>>> ID: 5 from __bpf_prog_enter+4 to migrate_disable+0
>> 
>> I suppose we can reduce that to a single branch if we inline
>> migrate_disable() here; that thing unfortunately needs one branch
>> itself.
> 
> Oooh, since we put local_irq_save/restore() in
> intel_pmu_snapshot_branch_stack(), we no longer need to be after
> migrate_disable(). You could go back to placing it earlier!

Hmm.. not really. We call migrate_disable() before entering the BPF program. 
And the helper calls snapshot_branch_stack() inside the BPF program. To move
it to before migrate_disable(), we will have to add a "whether to snapshot
branch stack" check before entering the BPF program. This check, while is
cheap, is added to all BPF programs on this hook, even when the program does 
not use snapshot at all. So we would rather keep all logic inside the helper, 
and not touch the common path. 
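
A simplified sketch of the two placements (hypothetical pseudo-code;
uses_branch_snapshot is an invented flag, not something the verifier tracks
today):

	/* Option A (rejected): snapshot before entering every program */
	migrate_disable();
	if (prog->uses_branch_snapshot)		/* extra check on every prog */
		static_call(perf_snapshot_branch_stack)(buf, cnt);
	bpf_prog_run(prog, ctx);

	/* Option B (this set): only programs that actually call the
	 * helper pay for the snapshot; bpf_get_branch_snapshot() does
	 * the static_call itself, from inside the program. */
	migrate_disable();
	bpf_prog_run(prog, ctx);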

Thanks,
Song






^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-10 19:00             ` Song Liu
@ 2021-09-10 19:08               ` Peter Zijlstra
  2021-09-10 19:11                 ` Song Liu
  0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2021-09-10 19:08 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, linux-kernel, acme, mingo, kjain, Kernel Team,
	John Fastabend, Andrii Nakryiko

On Fri, Sep 10, 2021 at 07:00:08PM +0000, Song Liu wrote:

> Hmm.. not really. We call migrate_disable() before entering the BPF program. 
> And the helper calls snapshot_branch_stack() inside the BPF program. To move
> it to before migrate_disable(), we will have to add a "whether to snapshot
> branch stack" check before entering the BPF program. This check, while is
> cheap, is added to all BPF programs on this hook, even when the program does 
> not use snapshot at all. So we would rather keep all logic inside the helper, 
> and not touch the common path. 

Moo :/ Because I also really don't want to expose struct rq, it's
currently nicely squirrelled away in kernel/sched/ and doesn't get
anywhere near include/.

Ah well, maybe we can do something clever with migrate_disable() itself.
I'll put it on this endless todo list ;-)

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 bpf-next 1/3] perf: enable branch record for software events
  2021-09-10 19:08               ` Peter Zijlstra
@ 2021-09-10 19:11                 ` Song Liu
  0 siblings, 0 replies; 16+ messages in thread
From: Song Liu @ 2021-09-10 19:11 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: bpf, linux-kernel, acme, mingo, kjain, Kernel Team,
	John Fastabend, Andrii Nakryiko



> On Sep 10, 2021, at 12:08 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> 
> On Fri, Sep 10, 2021 at 07:00:08PM +0000, Song Liu wrote:
> 
>> Hmm.. not really. We call migrate_disable() before entering the BPF program. 
>> And the helper calls snapshot_branch_stack() inside the BPF program. To move
>> it to before migrate_disable(), we will have to add a "whether to snapshot
>> branch stack" check before entering the BPF program. This check, while is
>> cheap, is added to all BPF programs on this hook, even when the program does 
>> not use snapshot at all. So we would rather keep all logic inside the helper, 
>> and not touch the common path. 
> 
> Moo :/ Because I also really don't want to expose struct rq, it's
> currently nicely squirrelled away in kernel/sched/ and doesn't get
> anywhere near include/.

This matches my guess, so I didn't go too far on that direction. :-)

> 
> Ah well, maybe we can do something clever with migrate_disable() itself.
> I'll put it on this endless todo list ;-)

Awesome!

Thanks,
Song

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-09-10 19:11 UTC | newest]

Thread overview: 16+ messages
2021-09-07 20:27 [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
2021-09-07 20:28 ` [PATCH v6 bpf-next 1/3] perf: enable branch record for software events Song Liu
2021-09-10 10:40   ` Peter Zijlstra
2021-09-10 13:54     ` Peter Zijlstra
2021-09-10 18:27       ` Song Liu
2021-09-10 18:40         ` Peter Zijlstra
2021-09-10 18:50           ` Peter Zijlstra
2021-09-10 19:00             ` Song Liu
2021-09-10 19:08               ` Peter Zijlstra
2021-09-10 19:11                 ` Song Liu
2021-09-10 18:51           ` Song Liu
2021-09-07 20:28 ` [PATCH v6 bpf-next 2/3] bpf: introduce helper bpf_get_branch_snapshot Song Liu
2021-09-07 20:28 ` [PATCH v6 bpf-next 3/3] selftests/bpf: add test for bpf_get_branch_snapshot Song Liu
2021-09-07 20:29 ` [PATCH v6 bpf-next 0/3] bpf: introduce bpf_get_branch_snapshot Song Liu
2021-09-07 20:58   ` Andrii Nakryiko
2021-09-09 21:53   ` Song Liu
