* [PATCH 0/3] Various perf patches
@ 2013-05-03 12:11 Peter Zijlstra
  2013-05-03 12:11 ` [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB Peter Zijlstra
                   ` (2 more replies)
  0 siblings, 3 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-03 12:11 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, ak, eranian

Some perf patches fixing things reported by Andi.


^ permalink raw reply	[flat|nested] 56+ messages in thread

* [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB
  2013-05-03 12:11 [PATCH 0/3] Various perf patches Peter Zijlstra
@ 2013-05-03 12:11 ` Peter Zijlstra
  2013-05-03 14:35   ` Andi Kleen
  2013-05-04  8:20   ` [tip:perf/urgent] perf/x86: Blacklist all MEM_*_RETIRED events for Ivy Bridge tip-bot for Peter Zijlstra
  2013-05-03 12:11 ` [PATCH 2/3] perf, x86, lbr: Fix LBR filter Peter Zijlstra
  2013-05-03 12:11 ` [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL Peter Zijlstra
  2 siblings, 2 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-03 12:11 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, ak, eranian, Peter Zijlstra

[-- Attachment #1: peterz-perf-x86-ivb-mem-retired.patch --]
[-- Type: text/plain, Size: 1588 bytes --]

Errata BV98 states that all MEM_*_RETIRED events corrupt the counter
values of the SMT sibling's counters. Blacklist these events.

Cc: eranian@google.com
Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-jwra43mujrv1oq9xk6mfe57v@git.kernel.org
---
 arch/x86/kernel/cpu/perf_event_intel.c |   13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -128,10 +128,15 @@ static struct event_constraint intel_ivb
 	INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4), /* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
 	INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4), /* CYCLE_ACTIVITY.STALLS_L1D_PENDING */
 	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
-	INTEL_EVENT_CONSTRAINT(0xd0, 0xf), /* MEM_UOPS_RETIRED.* */
-	INTEL_EVENT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
-	INTEL_EVENT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
-	INTEL_EVENT_CONSTRAINT(0xd3, 0xf), /*  MEM_LOAD_UOPS_LLC_MISS_RETIRED.* */
+	/*
+	 * Errata BV98 -- MEM_*_RETIRED events can leak between counters of SMT
+	 * siblings; disable these events because they can corrupt unrelated
+	 * counters.
+	 */
+	INTEL_EVENT_CONSTRAINT(0xd0, 0x0), /* MEM_UOPS_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd1, 0x0), /* MEM_LOAD_UOPS_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd2, 0x0), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd3, 0x0), /* MEM_LOAD_UOPS_LLC_MISS_RETIRED.* */
 	EVENT_CONSTRAINT_END
 };
 



^ permalink raw reply	[flat|nested] 56+ messages in thread
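The constraint change above works by shrinking each event's allowed-counter bitmask from 0xf (counters 0-3) to 0x0 (no counters), so the scheduler can never place a MEM_*_RETIRED event anywhere. A minimal sketch of how such a bitmask gate can be evaluated (illustrative types only, not the kernel's actual constraint scheduler):

```c
#include <assert.h>

/* Illustrative constraint: an event select code plus a bitmask of the
 * hardware counters the event is permitted to use. */
struct event_constraint {
	unsigned int code;   /* event select code, e.g. 0xd0 */
	unsigned int cmask;  /* bit i set => counter i may host the event */
};

/* May an event governed by 'c' be scheduled on counter 'idx'? */
static int counter_allowed(const struct event_constraint *c, int idx)
{
	return (c->cmask >> idx) & 1;
}
```

With cmask 0x0 every counter is refused, which is how blacklisting falls out of the existing constraint machinery without any new code paths.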

* [PATCH 2/3] perf, x86, lbr: Fix LBR filter
  2013-05-03 12:11 [PATCH 0/3] Various perf patches Peter Zijlstra
  2013-05-03 12:11 ` [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB Peter Zijlstra
@ 2013-05-03 12:11 ` Peter Zijlstra
  2013-05-03 14:34   ` Andi Kleen
  2013-05-04  8:21   ` [tip:perf/urgent] perf/x86/intel/lbr: " tip-bot for Peter Zijlstra
  2013-05-03 12:11 ` [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL Peter Zijlstra
  2 siblings, 2 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-03 12:11 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, ak, eranian, Peter Zijlstra

[-- Attachment #1: peterz-perf-x86-lbr-kernel-text.patch --]
[-- Type: text/plain, Size: 1253 bytes --]

The LBR 'from' address is under full userspace control; ensure we
validate it before reading from it.

Note: is_module_text_address() can potentially be quite expensive;
      for those running into that, either stop using modules as
      all sane people do, or optimize it using an RCU-backed rb-tree.

Reported-by: Andi Kleen <ak@linux.intel.com>
Cc: eranian@google.com
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-mk8i82ffzax01cnqo829iy1q@git.kernel.org
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -442,8 +442,18 @@ static int branch_type(unsigned long fro
 			return X86_BR_NONE;
 
 		addr = buf;
-	} else
-		addr = (void *)from;
+	} else {
+		/*
+		 * The LBR logs any address in the IP, even if the IP just
+		 * faulted. This means userspace can control the from address.
+		 * Ensure we don't blindly read any address by validating it is
+		 * a known text address.
+		 */
+		if (kernel_text_address(from))
+			addr = (void *)from;
+		else
+			return X86_BR_NONE;
+	}
 
 	/*
 	 * decoder needs to know the ABI especially



^ permalink raw reply	[flat|nested] 56+ messages in thread
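The fix above refuses to dereference the LBR 'from' value unless kernel_text_address() recognizes it. The shape of that validate-before-read pattern, with a hypothetical bounds check standing in for kernel_text_address(), looks like this:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical text-segment bounds, a stand-in for kernel_text_address(). */
#define TEXT_START 0x1000u
#define TEXT_END   0x2000u

static int text_address(uint32_t addr)
{
	return addr >= TEXT_START && addr < TEXT_END;
}

enum branch { BR_NONE = 0, BR_COND = 1 };

/* Mirrors the patch's control flow: classify only branches whose 'from'
 * address is known text; reject everything else before reading it. */
static enum branch classify(uint32_t from)
{
	if (!text_address(from))
		return BR_NONE;  /* userspace-controlled address: do not read */
	return BR_COND;          /* safe to decode the instruction at 'from' */
}
```

The point of the pattern is that the rejection happens before any load from the attacker-influenced address, not after.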

* [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-03 12:11 [PATCH 0/3] Various perf patches Peter Zijlstra
  2013-05-03 12:11 ` [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB Peter Zijlstra
  2013-05-03 12:11 ` [PATCH 2/3] perf, x86, lbr: Fix LBR filter Peter Zijlstra
@ 2013-05-03 12:11 ` Peter Zijlstra
  2013-05-03 14:41   ` Andi Kleen
                     ` (2 more replies)
  2 siblings, 3 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-03 12:11 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, ak, eranian, Peter Zijlstra

[-- Attachment #1: peterz-perf-x86-lbr-priv.patch --]
[-- Type: text/plain, Size: 831 bytes --]

We should always have proper privileges when requesting kernel data.

Cc: Andi Kleen <ak@linux.intel.com>
Cc: eranian@google.com
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
 	if (br_type & PERF_SAMPLE_BRANCH_USER)
 		mask |= X86_BR_USER;
 
-	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
+	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
+		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
+			return -EACCES;
 		mask |= X86_BR_KERNEL;
+	}
 
 	/* we ignore BRANCH_HV here */
 



^ permalink raw reply	[flat|nested] 56+ messages in thread
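The check added above gates kernel branch data on privilege at filter-setup time. A compact sketch of that gating pattern, with plain variables standing in for perf_paranoid_kernel() and capable(CAP_SYS_ADMIN) (both stand-ins are hypothetical):

```c
#include <assert.h>
#include <errno.h>

#define BRANCH_USER   0x1
#define BRANCH_KERNEL 0x2

/* Hypothetical stand-ins for perf_paranoid_kernel() and capable(). */
static int paranoid  = 1;
static int has_admin = 0;

/* Returns 0 and fills *mask on success, or -EACCES when kernel data is
 * requested without sufficient privilege, mirroring the patch's shape. */
static int setup_filter(unsigned int br_type, unsigned int *mask)
{
	*mask = 0;
	if (br_type & BRANCH_USER)
		*mask |= BRANCH_USER;
	if (br_type & BRANCH_KERNEL) {
		if (paranoid && !has_admin)
			return -EACCES;
		*mask |= BRANCH_KERNEL;
	}
	return 0;
}
```

Note that a user-only request is never penalized; only the kernel bit triggers the capability check.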

* Re: [PATCH 2/3] perf, x86, lbr: Fix LBR filter
  2013-05-03 12:11 ` [PATCH 2/3] perf, x86, lbr: Fix LBR filter Peter Zijlstra
@ 2013-05-03 14:34   ` Andi Kleen
  2013-05-04  6:34     ` Ingo Molnar
  2013-05-04  8:21   ` [tip:perf/urgent] perf/x86/intel/lbr: " tip-bot for Peter Zijlstra
  1 sibling, 1 reply; 56+ messages in thread
From: Andi Kleen @ 2013-05-03 14:34 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, linux-kernel, eranian

> +	} else {
> +		/*
> +		 * The LBR logs any address in the IP, even if the IP just
> +		 * faulted. This means userspace can control the from address.
> +		 * Ensure we don't blindly read any address by validating it is
> +		 * a known text address.
> +		 */
> +		if (kernel_text_address(from))

Sorry, doing it this way is just incredibly expensive and dumb.

After all, the tool is called "perf" and not "non perf".

-Andi

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB
  2013-05-03 12:11 ` [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB Peter Zijlstra
@ 2013-05-03 14:35   ` Andi Kleen
  2013-05-03 17:00     ` Peter Zijlstra
  2013-05-04  8:20   ` [tip:perf/urgent] perf/x86: Blacklist all MEM_*_RETIRED events for Ivy Bridge tip-bot for Peter Zijlstra
  1 sibling, 1 reply; 56+ messages in thread
From: Andi Kleen @ 2013-05-03 14:35 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, linux-kernel, eranian

On Fri, May 03, 2013 at 02:11:23PM +0200, Peter Zijlstra wrote:
> Errata BV98 states that all MEM_*_RETIRED events corrupt the counter
> value of the SMT sibling's counters. Blacklist these events

I disagree with this patch. This is just overkill and not needed
at all.

-Andi

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-03 12:11 ` [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL Peter Zijlstra
@ 2013-05-03 14:41   ` Andi Kleen
  2013-05-04  8:22   ` [tip:perf/urgent] perf/x86/intel/lbr: " tip-bot for Peter Zijlstra
  2013-05-15 13:37   ` [PATCH 3/3] perf, x86, lbr: " Stephane Eranian
  2 siblings, 0 replies; 56+ messages in thread
From: Andi Kleen @ 2013-05-03 14:41 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, linux-kernel, eranian

On Fri, May 03, 2013 at 02:11:25PM +0200, Peter Zijlstra wrote:
> We should always have proper privileges when requesting kernel data.
> 
> Cc: Andi Kleen <ak@linux.intel.com>

Acked-by: Andi Kleen <ak@linux.intel.com>

> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
>  	if (br_type & PERF_SAMPLE_BRANCH_USER)
>  		mask |= X86_BR_USER;
>  
> -	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> +	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> +		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> +			return -EACCES;
>  		mask |= X86_BR_KERNEL;
> +	}
>  
>  	/* we ignore BRANCH_HV here */
>  
> 
> 

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB
  2013-05-03 14:35   ` Andi Kleen
@ 2013-05-03 17:00     ` Peter Zijlstra
  2013-05-15 14:20       ` Stephane Eranian
  0 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-03 17:00 UTC (permalink / raw)
  To: Andi Kleen; +Cc: Peter Zijlstra, mingo, linux-kernel, eranian

On Fri, May 03, 2013 at 07:35:07AM -0700, Andi Kleen wrote:
> On Fri, May 03, 2013 at 02:11:23PM +0200, Peter Zijlstra wrote:
> > Errata BV98 states that all MEM_*_RETIRED events corrupt the counter
> > value of the SMT sibling's counters. Blacklist these events
> 
> I disagree with this patch. This is just overkill and not needed
> at all.

Then give me a patch that both completely describes the problem and ensures
isolation so that no counters get corrupted (and isn't too invasive).

So far you've failed on both counts.



^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 2/3] perf, x86, lbr: Fix LBR filter
  2013-05-03 14:34   ` Andi Kleen
@ 2013-05-04  6:34     ` Ingo Molnar
  0 siblings, 0 replies; 56+ messages in thread
From: Ingo Molnar @ 2013-05-04  6:34 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Peter Zijlstra, linux-kernel, eranian, Arnaldo Carvalho de Melo


* Andi Kleen <ak@linux.intel.com> wrote:

> > +	} else {
> > +		/*
> > +		 * The LBR logs any address in the IP, even if the IP just
> > +		 * faulted. This means userspace can control the from address.
> > +		 * Ensure we don't blindly read any address by validating it is
> > +		 * a known text address.
> > +		 */
> > +		if (kernel_text_address(from))
> 
> Sorry, doing it this way is just incredibly expensive and dumb.

If anyone using this feature notices the __module_address() overhead, then 
a 'module addresses RCU rbtree' could be added, which should eliminate the 
overhead.

In any case Peter's patch fixes the bug without regressing the feature as 
it is implemented today. Do you have a better solution that does not break 
the ABI? The solution you proposed before regresses existing 
functionality.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* [tip:perf/urgent] perf/x86: Blacklist all MEM_*_RETIRED events for Ivy Bridge
  2013-05-03 12:11 ` [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB Peter Zijlstra
  2013-05-03 14:35   ` Andi Kleen
@ 2013-05-04  8:20   ` tip-bot for Peter Zijlstra
  1 sibling, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-05-04  8:20 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, a.p.zijlstra, stable, ak, tglx

Commit-ID:  741a698f420c34c458294a6accecfbad702a7c52
Gitweb:     http://git.kernel.org/tip/741a698f420c34c458294a6accecfbad702a7c52
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Fri, 3 May 2013 14:11:23 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 4 May 2013 08:37:46 +0200

perf/x86: Blacklist all MEM_*_RETIRED events for Ivy Bridge

Errata BV98 states that all MEM_*_RETIRED events corrupt the
counter values of the SMT sibling's counters. Blacklist these
events.

Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/20130503121256.083340271@chello.nl
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/n/tip-jwra43mujrv1oq9xk6mfe57v@git.kernel.org
---
 arch/x86/kernel/cpu/perf_event_intel.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index cc45deb..4a0a462 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -125,10 +125,15 @@ static struct event_constraint intel_ivb_event_constraints[] __read_mostly =
 	INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4), /* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
 	INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4), /* CYCLE_ACTIVITY.STALLS_L1D_PENDING */
 	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
-	INTEL_EVENT_CONSTRAINT(0xd0, 0xf), /* MEM_UOPS_RETIRED.* */
-	INTEL_EVENT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
-	INTEL_EVENT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
-	INTEL_EVENT_CONSTRAINT(0xd3, 0xf), /*  MEM_LOAD_UOPS_LLC_MISS_RETIRED.* */
+	/*
+	 * Errata BV98 -- MEM_*_RETIRED events can leak between counters of SMT
+	 * siblings; disable these events because they can corrupt unrelated
+	 * counters.
+	 */
+	INTEL_EVENT_CONSTRAINT(0xd0, 0x0), /* MEM_UOPS_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd1, 0x0), /* MEM_LOAD_UOPS_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd2, 0x0), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd3, 0x0), /* MEM_LOAD_UOPS_LLC_MISS_RETIRED.* */
 	EVENT_CONSTRAINT_END
 };
 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* [tip:perf/urgent] perf/x86/intel/lbr: Fix LBR filter
  2013-05-03 12:11 ` [PATCH 2/3] perf, x86, lbr: Fix LBR filter Peter Zijlstra
  2013-05-03 14:34   ` Andi Kleen
@ 2013-05-04  8:21   ` tip-bot for Peter Zijlstra
  1 sibling, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-05-04  8:21 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, a.p.zijlstra, stable, ak, tglx

Commit-ID:  6e15eb3ba6c0249c9e8c783517d131b47db995ca
Gitweb:     http://git.kernel.org/tip/6e15eb3ba6c0249c9e8c783517d131b47db995ca
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Fri, 3 May 2013 14:11:24 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 4 May 2013 08:37:47 +0200

perf/x86/intel/lbr: Fix LBR filter

The LBR 'from' address is under full userspace control; ensure
we validate it before reading from it.

Note: is_module_text_address() can potentially be quite
expensive; for those running into that with high overhead
in modules, optimize it using an RCU-backed rb-tree.

Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/20130503121256.158211806@chello.nl
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/n/tip-mk8i82ffzax01cnqo829iy1q@git.kernel.org
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
index da02e9c..de341d4 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -442,8 +442,18 @@ static int branch_type(unsigned long from, unsigned long to)
 			return X86_BR_NONE;
 
 		addr = buf;
-	} else
-		addr = (void *)from;
+	} else {
+		/*
+		 * The LBR logs any address in the IP, even if the IP just
+		 * faulted. This means userspace can control the from address.
+		 * Ensure we don't blindly read any address by validating it is
+		 * a known text address.
+		 */
+		if (kernel_text_address(from))
+			addr = (void *)from;
+		else
+			return X86_BR_NONE;
+	}
 
 	/*
 	 * decoder needs to know the ABI especially

^ permalink raw reply	[flat|nested] 56+ messages in thread

* [tip:perf/urgent] perf/x86/intel/lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-03 12:11 ` [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL Peter Zijlstra
  2013-05-03 14:41   ` Andi Kleen
@ 2013-05-04  8:22   ` tip-bot for Peter Zijlstra
  2013-05-04 11:19     ` Borislav Petkov
  2013-05-15 13:37   ` [PATCH 3/3] perf, x86, lbr: " Stephane Eranian
  2 siblings, 1 reply; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-05-04  8:22 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, a.p.zijlstra, stable, ak, tglx

Commit-ID:  0f5c78b5f33ce940034743e5f9485fc81ad75b0f
Gitweb:     http://git.kernel.org/tip/0f5c78b5f33ce940034743e5f9485fc81ad75b0f
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Fri, 3 May 2013 14:11:25 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 4 May 2013 08:37:48 +0200

perf/x86/intel/lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL

We should always have proper privileges when requesting kernel
data.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/20130503121256.230745028@chello.nl
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
index de341d4..0e92871 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
 	if (br_type & PERF_SAMPLE_BRANCH_USER)
 		mask |= X86_BR_USER;
 
-	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
+	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
+		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
+			return -EACCES;
 		mask |= X86_BR_KERNEL;
+	}
 
 	/* we ignore BRANCH_HV here */
 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [tip:perf/urgent] perf/x86/intel/lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-04  8:22   ` [tip:perf/urgent] perf/x86/intel/lbr: " tip-bot for Peter Zijlstra
@ 2013-05-04 11:19     ` Borislav Petkov
  2013-05-05  9:05       ` Ingo Molnar
  2013-05-06  8:07       ` Peter Zijlstra
  0 siblings, 2 replies; 56+ messages in thread
From: Borislav Petkov @ 2013-05-04 11:19 UTC (permalink / raw)
  To: mingo, a.p.zijlstra
  Cc: hpa, linux-kernel, stable, ak, tglx, linux-tip-commits

On Sat, May 04, 2013 at 01:22:57AM -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID:  0f5c78b5f33ce940034743e5f9485fc81ad75b0f
> Gitweb:     http://git.kernel.org/tip/0f5c78b5f33ce940034743e5f9485fc81ad75b0f
> Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
> AuthorDate: Fri, 3 May 2013 14:11:25 +0200
> Committer:  Ingo Molnar <mingo@kernel.org>
> CommitDate: Sat, 4 May 2013 08:37:48 +0200
> 
> perf/x86/intel/lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
> 
> We should always have proper privileges when requesting kernel
> data.
> 
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: <stable@kernel.org>
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: eranian@google.com
> Link: http://lkml.kernel.org/r/20130503121256.230745028@chello.nl
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
> ---
>  arch/x86/kernel/cpu/perf_event_intel_lbr.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> index de341d4..0e92871 100644
> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
>  	if (br_type & PERF_SAMPLE_BRANCH_USER)
>  		mask |= X86_BR_USER;
>  
> -	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> +	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> +		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> +			return -EACCES;

It is probably not too late to amend this patch and remove the "-EACCES":

arch/x86/kernel/cpu/perf_event_intel_lbr.c: In function ‘intel_pmu_setup_sw_lbr_filter’:
arch/x86/kernel/cpu/perf_event_intel_lbr.c:323:4: warning: ‘return’ with a value, in function returning void [enabled by default]

-- 
Regards/Gruss,
    Boris.

Sent from a fat crate under my desk. Formatting is fine.
--

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [tip:perf/urgent] perf/x86/intel/lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-04 11:19     ` Borislav Petkov
@ 2013-05-05  9:05       ` Ingo Molnar
  2013-05-06  8:07       ` Peter Zijlstra
  1 sibling, 0 replies; 56+ messages in thread
From: Ingo Molnar @ 2013-05-05  9:05 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: a.p.zijlstra, hpa, linux-kernel, stable, ak, tglx, linux-tip-commits


* Borislav Petkov <bp@alien8.de> wrote:

> On Sat, May 04, 2013 at 01:22:57AM -0700, tip-bot for Peter Zijlstra wrote:
> > Commit-ID:  0f5c78b5f33ce940034743e5f9485fc81ad75b0f
> > Gitweb:     http://git.kernel.org/tip/0f5c78b5f33ce940034743e5f9485fc81ad75b0f
> > Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
> > AuthorDate: Fri, 3 May 2013 14:11:25 +0200
> > Committer:  Ingo Molnar <mingo@kernel.org>
> > CommitDate: Sat, 4 May 2013 08:37:48 +0200
> > 
> > perf/x86/intel/lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
> > 
> > We should always have proper privileges when requesting kernel
> > data.
> > 
> > Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > Cc: <stable@kernel.org>
> > Cc: Andi Kleen <ak@linux.intel.com>
> > Cc: eranian@google.com
> > Link: http://lkml.kernel.org/r/20130503121256.230745028@chello.nl
> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> > Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
> > ---
> >  arch/x86/kernel/cpu/perf_event_intel_lbr.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > index de341d4..0e92871 100644
> > --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
> >  	if (br_type & PERF_SAMPLE_BRANCH_USER)
> >  		mask |= X86_BR_USER;
> >  
> > -	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> > +	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> > +		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> > +			return -EACCES;
> 
> It is probably not too late to amend this patch and remove the "-EACCES":
> 
> arch/x86/kernel/cpu/perf_event_intel_lbr.c: In function ‘intel_pmu_setup_sw_lbr_filter’:
> arch/x86/kernel/cpu/perf_event_intel_lbr.c:323:4: warning: ‘return’ with a value, in function returning void [enabled by default]

Well, the better fix is to propagate that error condition down instead of 
ignoring it.

Since it was the head commit of a tree that others don't typically pull 
and base development work on, I amended it.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [tip:perf/urgent] perf/x86/intel/lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-04 11:19     ` Borislav Petkov
  2013-05-05  9:05       ` Ingo Molnar
@ 2013-05-06  8:07       ` Peter Zijlstra
  2013-05-06  9:42         ` Ingo Molnar
  1 sibling, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-06  8:07 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: mingo, a.p.zijlstra, hpa, linux-kernel, stable, ak, tglx,
	linux-tip-commits

> > @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
> >  	if (br_type & PERF_SAMPLE_BRANCH_USER)
> >  		mask |= X86_BR_USER;
> >  
> > -	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> > +	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> > +		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> > +			return -EACCES;
> 
> It is probably not too late to amend this patch and remove the "-EACCES":
> 
> arch/x86/kernel/cpu/perf_event_intel_lbr.c: In function ‘intel_pmu_setup_sw_lbr_filter’:
> arch/x86/kernel/cpu/perf_event_intel_lbr.c:323:4: warning: ‘return’ with a value, in function returning void [enabled by default]

Oh urgh, looks like I forgot a refresh before posting...

This one actually compiles a defconfig bzImage.

---
Subject: perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Fri May 03 14:07:49 CEST 2013

We should always have proper privileges when requesting kernel data.

Cc: Andi Kleen <ak@linux.intel.com>
Cc: eranian@google.com
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-deb8yrh5fq2bijn5tlmezkmd@git.kernel.org
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c |   15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -310,7 +310,7 @@ void intel_pmu_lbr_read(void)
  * - in case there is no HW filter
  * - in case the HW filter has errata or limitations
  */
-static void intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
+static int intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
 {
 	u64 br_type = event->attr.branch_sample_type;
 	int mask = 0;
@@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
 	if (br_type & PERF_SAMPLE_BRANCH_USER)
 		mask |= X86_BR_USER;
 
-	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
+	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
+		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
+			return -EACCES;
 		mask |= X86_BR_KERNEL;
+	}
 
 	/* we ignore BRANCH_HV here */
 
@@ -339,6 +342,8 @@ static void intel_pmu_setup_sw_lbr_filte
 	 * be used by fixup code for some CPU
 	 */
 	event->hw.branch_reg.reg = mask;
+
+	return 0;
 }
 
 /*
@@ -375,7 +380,7 @@ static int intel_pmu_setup_hw_lbr_filter
 
 int intel_pmu_setup_lbr_filter(struct perf_event *event)
 {
-	int ret = 0;
+	int ret;
 
 	/*
 	 * no LBR on this PMU
@@ -386,7 +391,9 @@ int intel_pmu_setup_lbr_filter(struct pe
 	/*
 	 * setup SW LBR filter
 	 */
-	intel_pmu_setup_sw_lbr_filter(event);
+	ret = intel_pmu_setup_sw_lbr_filter(event);
+	if (ret)
+		return ret;
 
 	/*
 	 * setup HW LBR filter, if any



^ permalink raw reply	[flat|nested] 56+ messages in thread
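The refresh changes intel_pmu_setup_sw_lbr_filter() from void to int precisely so the -EACCES can travel up through intel_pmu_setup_lbr_filter() to the caller. A reduced sketch of that error-propagation shape (function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical inner step that may fail with a negative errno,
 * as the SW LBR filter setup now can. */
static int sw_step(int fail)
{
	return fail ? -EACCES : 0;
}

/* The caller propagates the first failure instead of discarding it,
 * mirroring intel_pmu_setup_lbr_filter() after the refresh. */
static int setup(int fail)
{
	int ret = sw_step(fail);
	if (ret)
		return ret;  /* bail out before the HW filter step */
	return 0;            /* the HW filter step would follow here */
}
```

This is the fix Ingo alludes to: propagate the error condition down rather than ignore it, which also silences the 'return with a value, in function returning void' warning.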

* Re: [tip:perf/urgent] perf/x86/intel/lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-06  8:07       ` Peter Zijlstra
@ 2013-05-06  9:42         ` Ingo Molnar
  0 siblings, 0 replies; 56+ messages in thread
From: Ingo Molnar @ 2013-05-06  9:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Borislav Petkov, a.p.zijlstra, hpa, linux-kernel, stable, ak,
	tglx, linux-tip-commits


* Peter Zijlstra <peterz@infradead.org> wrote:

> > > @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
> > >  	if (br_type & PERF_SAMPLE_BRANCH_USER)
> > >  		mask |= X86_BR_USER;
> > >  
> > > -	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> > > +	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> > > +		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> > > +			return -EACCES;
> > 
> > It is probably not too late to amend this patch and remove the "-EACCES":
> > 
> > arch/x86/kernel/cpu/perf_event_intel_lbr.c: In function ???intel_pmu_setup_sw_lbr_filter???:
> > arch/x86/kernel/cpu/perf_event_intel_lbr.c:323:4: warning: ???return??? with a value, in function returning void [enabled by default]
> 
> Oh urgh, looks like I forgot a refresh before posting..
> 
> This one actually compiles a defconfig bzImage.
> 
> ---
> Subject: perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
> From: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Date: Fri May 03 14:07:49 CEST 2013
> 
> We should always have proper privileges when requesting kernel data.
> 
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: eranian@google.com
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Link: http://lkml.kernel.org/n/tip-deb8yrh5fq2bijn5tlmezkmd@git.kernel.org
> ---
>  arch/x86/kernel/cpu/perf_event_intel_lbr.c |   15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -310,7 +310,7 @@ void intel_pmu_lbr_read(void)
>   * - in case there is no HW filter
>   * - in case the HW filter has errata or limitations
>   */
> -static void intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
> +static int intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
>  {
>  	u64 br_type = event->attr.branch_sample_type;
>  	int mask = 0;
> @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
>  	if (br_type & PERF_SAMPLE_BRANCH_USER)
>  		mask |= X86_BR_USER;
>  
> -	if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> +	if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> +		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> +			return -EACCES;
>  		mask |= X86_BR_KERNEL;
> +	}
>  
>  	/* we ignore BRANCH_HV here */
>  
> @@ -339,6 +342,8 @@ static void intel_pmu_setup_sw_lbr_filte
>  	 * be used by fixup code for some CPU
>  	 */
>  	event->hw.branch_reg.reg = mask;
> +
> +	return 0;
>  }
>  
>  /*
> @@ -375,7 +380,7 @@ static int intel_pmu_setup_hw_lbr_filter
>  
>  int intel_pmu_setup_lbr_filter(struct perf_event *event)
>  {
> -	int ret = 0;
> +	int ret;
>  
>  	/*
>  	 * no LBR on this PMU
> @@ -386,7 +391,9 @@ int intel_pmu_setup_lbr_filter(struct pe
>  	/*
>  	 * setup SW LBR filter
>  	 */
> -	intel_pmu_setup_sw_lbr_filter(event);
> +	ret = intel_pmu_setup_sw_lbr_filter(event);
> +	if (ret)
> +		return ret;
>  
>  	/*
>  	 * setup HW LBR filter, if any

That looks pretty close to what I did as well.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-03 12:11 ` [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL Peter Zijlstra
  2013-05-03 14:41   ` Andi Kleen
  2013-05-04  8:22   ` [tip:perf/urgent] perf/x86/intel/lbr: " tip-bot for Peter Zijlstra
@ 2013-05-15 13:37   ` Stephane Eranian
  2013-05-15 14:30     ` Peter Zijlstra
  2013-05-16  9:09     ` Peter Zijlstra
  2 siblings, 2 replies; 56+ messages in thread
From: Stephane Eranian @ 2013-05-15 13:37 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML, ak

On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> We should always have proper privileges when requesting kernel data.
>
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: eranian@google.com
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
> ---
>  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
>         if (br_type & PERF_SAMPLE_BRANCH_USER)
>                 mask |= X86_BR_USER;
>
> -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> +                       return -EACCES;
>                 mask |= X86_BR_KERNEL;
> +       }
>
This will prevent regular users from capturing kernel -> kernel branches.
But it won't prevent users from getting kernel -> user branches. Thus
some kernel address will still be captured. I guess they could be eliminated
by the sw_filter.

When using LBR priv level filtering, the filter applies to the branch target
only.
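(Stephane's point about the sw_filter can be sketched outside the kernel. Below is a hypothetical stand-alone version of such a pass; the `kernel_ip()` cutoff is an illustrative constant rather than the kernel's real check, and the struct is simplified.)

```c
#include <stdint.h>

/* Illustrative stand-in for the kernel's kernel_ip() test: on x86-64,
 * kernel addresses live in the upper half of the canonical space. */
static int kernel_ip(uint64_t ip)
{
	return ip >= 0xffff800000000000ULL;
}

struct branch_entry {
	uint64_t from;
	uint64_t to;
};

/* Drop every entry whose source or target lies in the kernel,
 * compacting the array in place; returns the new entry count. */
static int sw_filter_kernel(struct branch_entry *e, int nr)
{
	int i, j = 0;

	for (i = 0; i < nr; i++) {
		if (kernel_ip(e[i].from) || kernel_ip(e[i].to))
			continue;	/* kernel address would leak: discard */
		e[j++] = e[i];
	}
	return j;
}
```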

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB
  2013-05-03 17:00     ` Peter Zijlstra
@ 2013-05-15 14:20       ` Stephane Eranian
  2013-05-15 16:51         ` Peter Zijlstra
  0 siblings, 1 reply; 56+ messages in thread
From: Stephane Eranian @ 2013-05-15 14:20 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Andi Kleen, Peter Zijlstra, Ingo Molnar, LKML

Hi,

Based on our testing, it appears the corruption occurs only
when the MEM_* events are used and only on the sibling
counter. In other words, if HT0 has MEM_* in cntr0, then
HT1 cntr0 cannot be used, otherwise whatever is there may
get corrupted. So I think we could enhance Andi's initial patch
to handle this case instead of blacklisting those events. They are
very important events.
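(The constraint described above — a MEM_* event in HT0's cntr0 making HT1's cntr0 unusable — amounts to per-index cross-thread exclusion. A minimal user-space sketch of that bookkeeping follows; the names and the 4-counter assumption are illustrative, not the eventual kernel implementation.)

```c
#include <stdint.h>

#define NUM_COUNTERS 4

/* Per-HT-sibling mask of counters currently holding a MEM_*_RETIRED event. */
struct ht_state {
	uint32_t mem_event_mask;
};

/* A counter on this thread is usable unless the sibling thread has a
 * MEM_* event programmed in the counter with the same index (BV98). */
static int counter_usable(const struct ht_state *sibling, int idx)
{
	return !(sibling->mem_event_mask & (1u << idx));
}
```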


On Fri, May 3, 2013 at 7:00 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, May 03, 2013 at 07:35:07AM -0700, Andi Kleen wrote:
>> On Fri, May 03, 2013 at 02:11:23PM +0200, Peter Zijlstra wrote:
>> > Errata BV98 states that all MEM_*_RETIRED events corrupt the counter
>> > value of the SMT sibling's counters. Blacklist these events
>>
>> I disagree with this patch. This is just overkill and not needed
>> at all.
>
> Then give me a patch that both completely describes the problem and ensures
> isolation so that no counters get corrupted (and isn't too invasive).
>
> So far you've failed on both counts.
>
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-15 13:37   ` [PATCH 3/3] perf, x86, lbr: " Stephane Eranian
@ 2013-05-15 14:30     ` Peter Zijlstra
  2013-05-16  9:09     ` Peter Zijlstra
  1 sibling, 0 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-15 14:30 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: Ingo Molnar, LKML, ak

On Wed, May 15, 2013 at 03:37:22PM +0200, Stephane Eranian wrote:
> On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> > We should always have proper privileges when requesting kernel data.
> >
> > Cc: Andi Kleen <ak@linux.intel.com>
> > Cc: eranian@google.com
> > Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
> > ---
> >  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
> >         if (br_type & PERF_SAMPLE_BRANCH_USER)
> >                 mask |= X86_BR_USER;
> >
> > -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> > +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> > +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> > +                       return -EACCES;
> >                 mask |= X86_BR_KERNEL;
> > +       }
> >
> This will prevent regular users from capturing kernel -> kernel branches.
> But it won't prevent users from getting kernel -> user branches. Thus
> some kernel address will still be captured. I guess they could be eliminated
> by the sw_filter.
> 
> When using LBR priv level filtering, the filter applies to the branch target
> only.

Ah, indeed. I'll try and whip up a patch.

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB
  2013-05-15 14:20       ` Stephane Eranian
@ 2013-05-15 16:51         ` Peter Zijlstra
  2013-05-16 15:42           ` Stephane Eranian
  0 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-15 16:51 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: Andi Kleen, Ingo Molnar, LKML

On Wed, May 15, 2013 at 04:20:46PM +0200, Stephane Eranian wrote:
> Hi,
> 
> Based on our testing, it appears the corruption occurs only
> when the MEM_* events are used and only on the sibling
> counter. In other words, if HT0 has MEM_* in cntr0, then
> HT1 cntr0 cannot be used, otherwise whatever is there may
> get corrupted.

Ah, great. That is the least horrid case :-)

> So I think we could enhance Andi's initial patch
> to handle this case instead of blacklisting those events. They are
> very important events.

Will you take a stab at it? I suppose you'll have to make each counter
have a shared resource and have the mem_*_retired events mark the
resource taken.

Also, did you test what happens when you turn SMT off in the BIOS so you
get double the amount of counters; do we then leak into CNTn+4 or is all
well again?



^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-15 13:37   ` [PATCH 3/3] perf, x86, lbr: " Stephane Eranian
  2013-05-15 14:30     ` Peter Zijlstra
@ 2013-05-16  9:09     ` Peter Zijlstra
  2013-05-16  9:17       ` Peter Zijlstra
                         ` (2 more replies)
  1 sibling, 3 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-16  9:09 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: Ingo Molnar, LKML, ak, Michael Neuling

On Wed, May 15, 2013 at 03:37:22PM +0200, Stephane Eranian wrote:
> On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> > We should always have proper privileges when requesting kernel data.
> >
> > Cc: Andi Kleen <ak@linux.intel.com>
> > Cc: eranian@google.com
> > Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
> > ---
> >  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
> >         if (br_type & PERF_SAMPLE_BRANCH_USER)
> >                 mask |= X86_BR_USER;
> >
> > -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> > +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> > +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> > +                       return -EACCES;
> >                 mask |= X86_BR_KERNEL;
> > +       }
> >
> This will prevent regular users from capturing kernel -> kernel branches.
> But it won't prevent users from getting kernel -> user branches. Thus
> some kernel address will still be captured. I guess they could be eliminated
> by the sw_filter.
> 
> When using LBR priv level filtering, the filter applies to the branch target
> only.

How about something like the below? It also adds the branch flags Mikey wanted
for PowerPC.

---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c | 12 +++++++++---
 include/linux/perf_event.h                 | 10 +++++++---
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
index d978353..f44d635 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -585,17 +585,23 @@ intel_pmu_lbr_filter(struct cpu_hw_events *cpuc)
 
 		/* if type does not correspond, then discard */
 		if (type == X86_BR_NONE || (br_sel & type) != type) {
-			cpuc->lbr_entries[i].from = 0;
+			cpuc->lbr_entries[i].__delete = 1;
 			compress = true;
 		}
+
+		/* hide kernel addresses if we're not privileged  */
+		if (!(br_sel & X86_BR_KERNEL) && kernel_ip(from)) {
+			cpuc->lbr_entries[i].from = -1L;
+			cpuc->lbr_entries[i].invalid_from = 1;
+		}
 	}
 
 	if (!compress)
 		return;
 
-	/* remove all entries with from=0 */
+	/* remove all entries with __delete */
 	for (i = 0; i < cpuc->lbr_stack.nr; ) {
-		if (!cpuc->lbr_entries[i].from) {
+		if (cpuc->lbr_entries[i].__delete) {
 			j = i;
 			while (++j < cpuc->lbr_stack.nr)
 				cpuc->lbr_entries[j-1] = cpuc->lbr_entries[j];
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index f463a46..7acf1c9 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -77,9 +77,13 @@ struct perf_raw_record {
 struct perf_branch_entry {
 	__u64	from;
 	__u64	to;
-	__u64	mispred:1,  /* target mispredicted */
-		predicted:1,/* target predicted */
-		reserved:62;
+	__u64	mispred:1,	/* target mispredicted		*/
+		predicted:1,	/* target predicted		*/
+		invalid_to:1,	/* @to isn't to be trusted	*/
+		invalid_from:1, /* @from isn't to be trusted	*/
+		reserved:59,
+		__delete:1;	/* Implementation; userspace should
+				   always see a 0 		*/
 };
 
 /*


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-16  9:09     ` Peter Zijlstra
@ 2013-05-16  9:17       ` Peter Zijlstra
  2013-05-16 10:09         ` Michael Neuling
  2013-05-16 10:15         ` Michael Neuling
  2 siblings, 0 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-16  9:17 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: Ingo Molnar, LKML, ak, Michael Neuling

On Thu, May 16, 2013 at 11:09:16AM +0200, Peter Zijlstra wrote:
> How about something like the below? It also adds the branch flags Mikey wanted
> for PowerPC.

The asymmetry is unfortunate, but I think it's more useful to have the from
kernel branch target than it is not to have it. This way you at least know
there was a kernel entry/exit and where you've continued.

Without either branch to kernel and branch from kernel entries you'd be
wondering WTF happened to your control flow.

> ---
>  arch/x86/kernel/cpu/perf_event_intel_lbr.c | 12 +++++++++---
>  include/linux/perf_event.h                 | 10 +++++++---
>  2 files changed, 16 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> index d978353..f44d635 100644
> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -585,17 +585,23 @@ intel_pmu_lbr_filter(struct cpu_hw_events *cpuc)
>  
>  		/* if type does not correspond, then discard */
>  		if (type == X86_BR_NONE || (br_sel & type) != type) {
> -			cpuc->lbr_entries[i].from = 0;
> +			cpuc->lbr_entries[i].__delete = 1;
>  			compress = true;
>  		}
> +
> +		/* hide kernel addresses if we're not privileged  */
> +		if (!(br_sel & X86_BR_KERNEL) && kernel_ip(from)) {
> +			cpuc->lbr_entries[i].from = -1L;
> +			cpuc->lbr_entries[i].invalid_from = 1;
> +		}
>  	}
>  
>  	if (!compress)
>  		return;
>  
> -	/* remove all entries with from=0 */
> +	/* remove all entries with __delete */
>  	for (i = 0; i < cpuc->lbr_stack.nr; ) {
> -		if (!cpuc->lbr_entries[i].from) {
> +		if (cpuc->lbr_entries[i].__delete) {
>  			j = i;
>  			while (++j < cpuc->lbr_stack.nr)
>  				cpuc->lbr_entries[j-1] = cpuc->lbr_entries[j];
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index f463a46..7acf1c9 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -77,9 +77,13 @@ struct perf_raw_record {
>  struct perf_branch_entry {
>  	__u64	from;
>  	__u64	to;
> -	__u64	mispred:1,  /* target mispredicted */
> -		predicted:1,/* target predicted */
> -		reserved:62;
> +	__u64	mispred:1,	/* target mispredicted		*/
> +		predicted:1,	/* target predicted		*/
> +		invalid_to:1,	/* @to isn't to be trusted	*/
> +		invalid_from:1, /* @from isn't to be trusted	*/
> +		reserved:59,
> +		__delete:1;	/* Implementation; userspace should
> +				   always see a 0 		*/
>  };
>  
>  /*
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-16  9:09     ` Peter Zijlstra
@ 2013-05-16 10:09         ` Michael Neuling
  2013-05-16 10:09         ` Michael Neuling
  2013-05-16 10:15         ` Michael Neuling
  2 siblings, 0 replies; 56+ messages in thread
From: Michael Neuling @ 2013-05-16 10:09 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Stephane Eranian, Ingo Molnar, LKML, ak, Linux PPC dev, michael, benh

Peter Zijlstra <peterz@infradead.org> wrote:

> On Wed, May 15, 2013 at 03:37:22PM +0200, Stephane Eranian wrote:
> > On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> > > We should always have proper privileges when requesting kernel data.
> > >
> > > Cc: Andi Kleen <ak@linux.intel.com>
> > > Cc: eranian@google.com
> > > Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > > Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
> > > ---
> > >  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
> > >  1 file changed, 4 insertions(+), 1 deletion(-)
> > >
> > > --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > > +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > > @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
> > >         if (br_type & PERF_SAMPLE_BRANCH_USER)
> > >                 mask |= X86_BR_USER;
> > >
> > > -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> > > +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> > > +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> > > +                       return -EACCES;
> > >                 mask |= X86_BR_KERNEL;
> > > +       }
> > >
> > This will prevent regular users from capturing kernel -> kernel branches.
> > But it won't prevent users from getting kernel -> user branches. Thus
> > some kernel address will still be captured. I guess they could be eliminated
> > by the sw_filter.
> > 
> > When using LBR priv level filtering, the filter applies to the branch target
> > only.
> 
> How about something like the below? It also adds the branch flags
> Mikey wanted for PowerPC.
> 
> ---
>  arch/x86/kernel/cpu/perf_event_intel_lbr.c | 12 +++++++++---
>  include/linux/perf_event.h                 | 10 +++++++---
>  2 files changed, 16 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> index d978353..f44d635 100644
> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -585,17 +585,23 @@ intel_pmu_lbr_filter(struct cpu_hw_events *cpuc)
>  
>  		/* if type does not correspond, then discard */
>  		if (type == X86_BR_NONE || (br_sel & type) != type) {
> -			cpuc->lbr_entries[i].from = 0;
> +			cpuc->lbr_entries[i].__delete = 1;
>  			compress = true;
>  		}
> +
> +		/* hide kernel addresses if we're not privileged  */
> +		if (!(br_sel & X86_BR_KERNEL) && kernel_ip(from)) {
> +			cpuc->lbr_entries[i].from = -1L;
> +			cpuc->lbr_entries[i].invalid_from = 1;
> +		}
>  	}
>  
>  	if (!compress)
>  		return;
>  
> -	/* remove all entries with from=0 */
> +	/* remove all entries with __delete */
>  	for (i = 0; i < cpuc->lbr_stack.nr; ) {
> -		if (!cpuc->lbr_entries[i].from) {
> +		if (cpuc->lbr_entries[i].__delete) {
>  			j = i;
>  			while (++j < cpuc->lbr_stack.nr)
>  				cpuc->lbr_entries[j-1] = cpuc->lbr_entries[j];
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index f463a46..7acf1c9 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -77,9 +77,13 @@ struct perf_raw_record {
>  struct perf_branch_entry {
>  	__u64	from;
>  	__u64	to;
> -	__u64	mispred:1,  /* target mispredicted */
> -		predicted:1,/* target predicted */
> -		reserved:62;
> +	__u64	mispred:1,	/* target mispredicted		*/
> +		predicted:1,	/* target predicted		*/
> +		invalid_to:1,	/* @to isn't to be trusted	*/
> +		invalid_from:1, /* @from isn't to be trusted	*/

Thanks Peter.  One possible issue...

When the kernel has to read the branch from memory, there is no way for
it to know that it's the same one that the HW actually executed.  Hence
there's a possibility that the to address is invalid but we can't tell
for sure.

I'm happy to just ignore that and mark the calculated to address as valid,
unless you think it would be worthwhile extra information to pass onto
the user?

If we wanted this extra fidelity we could add a possibly_invalid_to:1
flag to your patch but I'm not sure it's worth it to be honest.
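(For reference, the marking scheme from Peter's patch can be exercised stand-alone; the sketch below mirrors the proposed bitfield layout but is an illustration, not the final ABI.)

```c
#include <stdint.h>

/* Mirrors the proposed perf_branch_entry layout (sketch only). */
struct branch_entry {
	uint64_t from;
	uint64_t to;
	uint64_t mispred:1,	/* target mispredicted */
		 predicted:1,	/* target predicted */
		 invalid_to:1,	/* @to isn't to be trusted */
		 invalid_from:1,/* @from isn't to be trusted */
		 reserved:59,
		 __delete:1;	/* implementation detail, 0 to userspace */
};

/* Scrub a kernel source address for an unprivileged consumer. */
static void hide_kernel_from(struct branch_entry *e)
{
	e->from = (uint64_t)-1;
	e->invalid_from = 1;
}
```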

mikey

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-16  9:09     ` Peter Zijlstra
@ 2013-05-16 10:15         ` Michael Neuling
  2013-05-16 10:09         ` Michael Neuling
  2013-05-16 10:15         ` Michael Neuling
  2 siblings, 0 replies; 56+ messages in thread
From: Michael Neuling @ 2013-05-16 10:15 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Stephane Eranian, Ingo Molnar, LKML, ak, michael, benh, Linux PPC dev

Peter Zijlstra <peterz@infradead.org> wrote:

> On Wed, May 15, 2013 at 03:37:22PM +0200, Stephane Eranian wrote:
> > On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> > > We should always have proper privileges when requesting kernel data.
> > >
> > > Cc: Andi Kleen <ak@linux.intel.com>
> > > Cc: eranian@google.com
> > > Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > > Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
> > > ---
> > >  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
> > >  1 file changed, 4 insertions(+), 1 deletion(-)
> > >
> > > --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > > +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> > > @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
> > >         if (br_type & PERF_SAMPLE_BRANCH_USER)
> > >                 mask |= X86_BR_USER;
> > >
> > > -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
> > > +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
> > > +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
> > > +                       return -EACCES;
> > >                 mask |= X86_BR_KERNEL;
> > > +       }
> > >
> > This will prevent regular users from capturing kernel -> kernel branches.
> > But it won't prevent users from getting kernel -> user branches. Thus
> > some kernel address will still be captured. I guess they could be eliminated
> > by the sw_filter.
> > 
> > When using LBR priv level filtering, the filter applies to the branch target
> > only.
> 
> How about something like the below? It also adds the branch flags
> Mikey wanted for PowerPC.

Peter,

BTW PowerPC also has the ability to filter on conditional branches.  Any
chance we could add something like the following to perf also?

Mikey

diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index fb104e5..891c769 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -157,8 +157,9 @@ enum perf_branch_sample_type {
 	PERF_SAMPLE_BRANCH_ANY_CALL	= 1U << 4, /* any call branch */
 	PERF_SAMPLE_BRANCH_ANY_RETURN	= 1U << 5, /* any return branch */
 	PERF_SAMPLE_BRANCH_IND_CALL	= 1U << 6, /* indirect calls */
+	PERF_SAMPLE_BRANCH_CONDITIONAL  = 1U << 7, /* conditional branches */
 
-	PERF_SAMPLE_BRANCH_MAX		= 1U << 7, /* non-ABI */
+	PERF_SAMPLE_BRANCH_MAX		= 1U << 8, /* non-ABI */
 };
 
 #define PERF_SAMPLE_BRANCH_PLM_ALL \
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index cdf58ec..5b0b89d 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -676,6 +676,7 @@ static const struct branch_mode branch_modes[] = {
 	BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
 	BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
 	BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
+	BRANCH_OPT("cnd", PERF_SAMPLE_BRANCH_CONDITIONAL),
 	BRANCH_END
 };
 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-16 10:15         ` Michael Neuling
@ 2013-05-16 11:16           ` Peter Zijlstra
  -1 siblings, 0 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-16 11:16 UTC (permalink / raw)
  To: Michael Neuling
  Cc: Stephane Eranian, Ingo Molnar, LKML, ak, michael, benh, Linux PPC dev

On Thu, May 16, 2013 at 08:15:17PM +1000, Michael Neuling wrote:
> Peter,
> 
> BTW PowerPC also has the ability to filter on conditional branches.  Any
> chance we could add something like the following to perf also?
> 

I don't see an immediate problem with that except that we on x86 need to
implement that in the software filter. Stephane do you see any
fundamental issue with that?

> 
> diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
> index fb104e5..891c769 100644
> --- a/include/uapi/linux/perf_event.h
> +++ b/include/uapi/linux/perf_event.h
> @@ -157,8 +157,9 @@ enum perf_branch_sample_type {
>  	PERF_SAMPLE_BRANCH_ANY_CALL	= 1U << 4, /* any call branch */
>  	PERF_SAMPLE_BRANCH_ANY_RETURN	= 1U << 5, /* any return branch */
>  	PERF_SAMPLE_BRANCH_IND_CALL	= 1U << 6, /* indirect calls */
> +	PERF_SAMPLE_BRANCH_CONDITIONAL  = 1U << 7, /* conditional branches */
>  
> -	PERF_SAMPLE_BRANCH_MAX		= 1U << 7, /* non-ABI */
> +	PERF_SAMPLE_BRANCH_MAX		= 1U << 8, /* non-ABI */
>  };
>  
>  #define PERF_SAMPLE_BRANCH_PLM_ALL \
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index cdf58ec..5b0b89d 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -676,6 +676,7 @@ static const struct branch_mode branch_modes[] = {
>  	BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
>  	BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
>  	BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
> +	BRANCH_OPT("cnd", PERF_SAMPLE_BRANCH_CONDITIONAL),
>  	BRANCH_END
>  };
>  

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-16 11:16           ` Peter Zijlstra
@ 2013-05-16 15:36             ` Stephane Eranian
  -1 siblings, 0 replies; 56+ messages in thread
From: Stephane Eranian @ 2013-05-16 15:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael Neuling, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

On Thu, May 16, 2013 at 1:16 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Thu, May 16, 2013 at 08:15:17PM +1000, Michael Neuling wrote:
>> Peter,
>>
>> BTW PowerPC also has the ability to filter on conditional branches.  Any
>> chance we could add something like the follow to perf also?
>>
>
> I don't see an immediate problem with that except that we on x86 need to
> implement that in the software filter. Stephane do you see any
> fundamental issue with that?
>
On X86, the LBR cannot filter on conditional branches in HW. Thus, as Peter
said, it would have to be done in SW. I did not add that because I think those
branches are not necessarily useful for tools.

>>
>> diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
>> index fb104e5..891c769 100644
>> --- a/include/uapi/linux/perf_event.h
>> +++ b/include/uapi/linux/perf_event.h
>> @@ -157,8 +157,9 @@ enum perf_branch_sample_type {
>>       PERF_SAMPLE_BRANCH_ANY_CALL     = 1U << 4, /* any call branch */
>>       PERF_SAMPLE_BRANCH_ANY_RETURN   = 1U << 5, /* any return branch */
>>       PERF_SAMPLE_BRANCH_IND_CALL     = 1U << 6, /* indirect calls */
>> +     PERF_SAMPLE_BRANCH_CONDITIONAL  = 1U << 7, /* conditional branches */
>>
>> -     PERF_SAMPLE_BRANCH_MAX          = 1U << 7, /* non-ABI */
>> +     PERF_SAMPLE_BRANCH_MAX          = 1U << 8, /* non-ABI */
>>  };
>>
>>  #define PERF_SAMPLE_BRANCH_PLM_ALL \
>> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
>> index cdf58ec..5b0b89d 100644
>> --- a/tools/perf/builtin-record.c
>> +++ b/tools/perf/builtin-record.c
>> @@ -676,6 +676,7 @@ static const struct branch_mode branch_modes[] = {
>>       BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
>>       BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
>>       BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
>> +     BRANCH_OPT("cnd", PERF_SAMPLE_BRANCH_CONDITIONAL),
>>       BRANCH_END
>>  };
>>

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB
  2013-05-15 16:51         ` Peter Zijlstra
@ 2013-05-16 15:42           ` Stephane Eranian
  2013-05-16 16:07             ` Andi Kleen
  0 siblings, 1 reply; 56+ messages in thread
From: Stephane Eranian @ 2013-05-16 15:42 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Andi Kleen, Ingo Molnar, LKML

On Wed, May 15, 2013 at 6:51 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, May 15, 2013 at 04:20:46PM +0200, Stephane Eranian wrote:
>> Hi,
>>
>> Based on our testing, it appears the corruption occurs only
>> when the MEM_* events are used and only on the sibling
>> counter. In other words, if HT0 has MEM_* in cntr0, then
>> HT1 cntr0 cannot be used, otherwise whatever is there may
>> get corrupted.
>
> Ah, great. That is the least horrid case :-)
>
Yeah.

>> So I think we could enhance Andi's initial patch
>> to handle this case instead of blacklist those events. They are
>> very important events.
>
> Will you take a stab at it? I suppose you'll have to make each counter
> have a shared resource and have the mem_*_retired events mark the
> resource taken.
>
Yes, I will try to come up with a patch to force multiplexing when those
events are used.

The difficulty is that we cannot use the shared_regs infrastructure because
the constraint is quite different. Here we need to say that if a mem event
uses cnt0 on one thread, then nothing can be using cnt0 on the other thread.
The current shared_regs infrastructure does not work on the counter level.
So we need to invent something else. It cannot even be extended. We need
to schedule the mem event, then look at the assignment and exclude all
the counters used by the mem event from the other thread. Alternatively,
we can only schedule mem event on counters which are not currently used
by the sibling thread. We need to peek at the used counters across HT threads.
Not so easy to do.


> Also, did you test what happens when you turn SMT off in the BIOS so you
> get double the amount of counters; do we then leak into CNTn+4 or is all
> well again?
>
I have not tried that yet. Hopefully, it won't exhibit the problem.

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB
  2013-05-16 15:42           ` Stephane Eranian
@ 2013-05-16 16:07             ` Andi Kleen
  2013-05-16 16:26               ` Stephane Eranian
  0 siblings, 1 reply; 56+ messages in thread
From: Andi Kleen @ 2013-05-16 16:07 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: Peter Zijlstra, Ingo Molnar, LKML

> The difficulty is that we cannot use the shared_regs infrastructure because
> the constraint is quite different. Here we need to say that if a mem event
> uses cnt0 on one thread, then nothing can be using cnt0 on the other thread.
> The current shared_regs infrastructure does not work on the counter level.
> So we need to invent something else. It cannot even be extended.

Just need an extra reg per counter that is only triggered with that event.

-Andi

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB
  2013-05-16 16:07             ` Andi Kleen
@ 2013-05-16 16:26               ` Stephane Eranian
  0 siblings, 0 replies; 56+ messages in thread
From: Stephane Eranian @ 2013-05-16 16:26 UTC (permalink / raw)
  To: Andi Kleen; +Cc: Peter Zijlstra, Ingo Molnar, LKML

On Thu, May 16, 2013 at 6:07 PM, Andi Kleen <ak@linux.intel.com> wrote:
>> The difficulty is that we cannot use the shared_regs infrastructure because
>> the constraint is quite different. Here we need to say that if a mem event
>> uses cnt0 on one thread, then nothing can be using cnt0 on the other thread.
>> The current shared_regs infrastructure does not work on the counter level.
>> So we need to invent something else. It cannot even be extended.
>
> Just need a extra reg per counter, that is only triggered with that event.
>
Yes, it would be per counter and not per event. But the check has to happen
after the assignment and each time we release the counter.

> -Andi

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-16 15:36             ` Stephane Eranian
@ 2013-05-17 11:12               ` Peter Zijlstra
  -1 siblings, 0 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-17 11:12 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Michael Neuling, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

On Thu, May 16, 2013 at 05:36:11PM +0200, Stephane Eranian wrote:
> On Thu, May 16, 2013 at 1:16 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Thu, May 16, 2013 at 08:15:17PM +1000, Michael Neuling wrote:
> >> Peter,
> >>
> >> BTW PowerPC also has the ability to filter on conditional branches.  Any
> >> chance we could add something like the follow to perf also?
> >>
> >
> > I don't see an immediate problem with that except that we on x86 need to
> > implement that in the software filter. Stephane do you see any
> > fundamental issue with that?
> >
> On X86, the LBR cannot filter on conditional in HW. Thus as Peter said, it would
> have to be done in SW. I did not add that because I think those branches are
> not necessarily useful for tools.

Wouldn't it be mostly conditional branches that are the primary control flow
and can get predicted wrong? I mean, I'm sure someone will mispredict an
unconditional branch, but it's not like we care about people with such
afflictions, do we?

Anyway, since PPC people thought it worth baking into hardware, presumably they
have a compelling use case. Mikey could you see if you can retrieve that from
someone in the know? It might be interesting.

Also, it looks like it's trivial to add to x86; you seem to have already done
all the hard work by having X86_BR_JCC.

The only missing piece would be:

--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -337,6 +337,10 @@ static int intel_pmu_setup_sw_lbr_filter
 
 	if (br_type & PERF_SAMPLE_BRANCH_IND_CALL)
 		mask |= X86_BR_IND_CALL;
+
+	if (br_type & PERF_SAMPLE_BRANCH_CONDITIONAL)
+		mask |= X86_BR_JCC;
+
 	/*
 	 * stash actual user request into reg, it may
 	 * be used by fixup code for some CPU

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-17 11:12               ` Peter Zijlstra
@ 2013-05-17 11:32                 ` Michael Neuling
  -1 siblings, 0 replies; 56+ messages in thread
From: Michael Neuling @ 2013-05-17 11:32 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Stephane Eranian, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

Peter Zijlstra <peterz@infradead.org> wrote:

> On Thu, May 16, 2013 at 05:36:11PM +0200, Stephane Eranian wrote:
> > On Thu, May 16, 2013 at 1:16 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > > On Thu, May 16, 2013 at 08:15:17PM +1000, Michael Neuling wrote:
> > >> Peter,
> > >>
> > >> BTW PowerPC also has the ability to filter on conditional branches.  Any
> > >> chance we could add something like the follow to perf also?
> > >>
> > >
> > > I don't see an immediate problem with that except that we on x86 need to
> > > implement that in the software filter. Stephane do you see any
> > > fundamental issue with that?
> > >
> > On X86, the LBR cannot filter on conditional in HW. Thus as Peter said, it would
> > have to be done in SW. I did not add that because I think those branches are
> > not necessarily useful for tools.
> 
> Wouldn't it be mostly conditional branches that are the primary control flow
> and can get predicted wrong? I mean, I'm sure someone will miss-predict an
> unconditional branch but its not like we care about people with such
> afflictions do we?

You could mispredict the target address of a computed goto.  You'd know
it was taken but not know the target address until later in the pipeline.

On this, the POWER8 branch history buffer tells us two things about the
prediction status.  
  1) if the branch was predicted taken/not taken correctly
  2) if the target address was predicted correctly or not (for computed
     gotos only)
So we'd actually like more prediction bits too :-D

> Anyway, since PPC people thought it worth baking into hardware,
> presumably they have a compelling use case. Mikey could you see if you
> can retrieve that from someone in the know? It might be interesting.

I don't think we can mispredict a non-conditional, non-computed branch, but
I'll have to check with the HW folks.

Mikey

>
> Also, it looks like its trivial to add to x86, you seem to have already done
> all the hard work by having X86_BR_JCC.
> 
> The only missing piece would be:
> 
> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -337,6 +337,10 @@ static int intel_pmu_setup_sw_lbr_filter
>  
>  	if (br_type & PERF_SAMPLE_BRANCH_IND_CALL)
>  		mask |= X86_BR_IND_CALL;
> +
> +	if (br_type & PERF_SAMPLE_BRANCH_CONDITIONAL)
> +		mask |= X86_BR_JCC;
> +
>  	/*
>  	 * stash actual user request into reg, it may
>  	 * be used by fixup code for some CPU
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-17 11:32                 ` Michael Neuling
@ 2013-05-17 11:39                   ` Peter Zijlstra
  -1 siblings, 0 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-17 11:39 UTC (permalink / raw)
  To: Michael Neuling
  Cc: Stephane Eranian, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

On Fri, May 17, 2013 at 09:32:08PM +1000, Michael Neuling wrote:
> Peter Zijlstra <peterz@infradead.org> wrote:

> > Wouldn't it be mostly conditional branches that are the primary control flow
> > and can get predicted wrong? I mean, I'm sure someone will miss-predict an
> > unconditional branch but its not like we care about people with such
> > afflictions do we?
> 
> You could mispredict the target address of a computed goto.  You'd know
> it was taken but not know target address until later in the pipeline.

Oh right, computed targets could indeed be mispredicted. I was thinking more
about jumps with immediate values.

> On this, the POWER8 branch history buffer tells us two things about the
> prediction status.  
>   1) if the branch was predicted taken/not taken correctly
>   2) if the target address was predicted correctly or not (for computed
>      gotos only)
> So we'd actually like more prediction bits too :-D

So if I understand this right, 1) maps to the predicted flags we have; 2)
would be new stuff?

We don't really have anything like that on x86, but I suppose if you make the
thing optional and present a 'useful' use-case implemented in userspace code
we could take it :-)

> > Anyway, since PPC people thought it worth baking into hardware,
> > presumably they have a compelling use case. Mikey could you see if you
> > can retrieve that from someone in the know? It might be interesting.
> 
> I don't think we can mispredict a non-conditional non-computed but I'll
> have to check with the HW folks.

I was mostly wondering about the use-case for the conditional filter. Stephane
didn't think it useful, clearly your hardware guys thought different :-)

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
@ 2013-05-17 11:39                   ` Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-17 11:39 UTC (permalink / raw)
  To: Michael Neuling; +Cc: ak, LKML, Stephane Eranian, Linux PPC dev, Ingo Molnar

On Fri, May 17, 2013 at 09:32:08PM +1000, Michael Neuling wrote:
> Peter Zijlstra <peterz@infradead.org> wrote:

> > Wouldn't it be mostly conditional branches that are the primary control flow
> > and can get predicted wrong? I mean, I'm sure someone will miss-predict an
> > unconditional branch but its not like we care about people with such
> > afflictions do we?
> 
> You could mispredict the target address of a computed goto.  You'd know
> it was taken but not know target address until later in the pipeline.

Oh right, computed targets could indeed be mis predicted. I was more thinking
about jumps with immediate values.

> On this, the POWER8 branch history buffer tells us two things about the
> prediction status.  
>   1) if the branch was predicted taken/not taken correctly
>   2) if the target address was predicted correctly or not (for computed
>      gotos only)
> So we'd actually like more prediction bits too :-D

So if I understand this right, 1) maps to the predicted flags we have; 2)
would be new stuff?

We don't really have anything like that on x86, but I suppose if you make the
thing optional and present a 'useful' use-case implemented in userspace code
we could take it :-)

> > Anyway, since PPC people thought it worth baking into hardware,
> > presumably they have a compelling use case. Mikey could you see if you
> > can retrieve that from someone in the know? It might be interesting.
> 
> I don't think we can mispredict a non-conditional non-computed but I'll
> have to check with the HW folks.

I was mostly wondering about the use-case for the conditional filter. Stephane
didn't think it useful, clearly your hardware guys thought different :-)

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-17 11:39                   ` Peter Zijlstra
@ 2013-05-17 21:39                     ` Stephane Eranian
  -1 siblings, 0 replies; 56+ messages in thread
From: Stephane Eranian @ 2013-05-17 21:39 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael Neuling, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

On Fri, May 17, 2013 at 1:39 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, May 17, 2013 at 09:32:08PM +1000, Michael Neuling wrote:
>> Peter Zijlstra <peterz@infradead.org> wrote:
>
>> > Wouldn't it be mostly conditional branches that are the primary control flow
>> > and can get predicted wrong? I mean, I'm sure someone will miss-predict an
>> > unconditional branch but its not like we care about people with such
>> > afflictions do we?
>>
>> You could mispredict the target address of a computed goto.  You'd know
>> it was taken but not know target address until later in the pipeline.
>
> Oh right, computed targets could indeed be mis predicted. I was more thinking
> about jumps with immediate values.
>
>> On this, the POWER8 branch history buffer tells us two things about the
>> prediction status.
>>   1) if the branch was predicted taken/not taken correctly
>>   2) if the target address was predicted correctly or not (for computed
>>      gotos only)
>> So we'd actually like more prediction bits too :-D
>
> So if I understand this right, 1) maps to the predicted flags we have; 2)
> would be new stuff?
>
> We don't really have anything like that on x86, but I suppose if you make the
> thing optional and present a 'useful' use-case implemented in userspace code
> we could take it :-)
>
>> > Anyway, since PPC people thought it worth baking into hardware,
>> > presumably they have a compelling use case. Mikey could you see if you
>> > can retrieve that from someone in the know? It might be interesting.
>>
>> I don't think we can mispredict a non-conditional non-computed but I'll
>> have to check with the HW folks.
>
> I was mostly wondering about the use-case for the conditional filter. Stephane
> didn't think it useful, clearly your hardware guys thought different :-)

From my experience talking with compiler people, they care about ALL the
branches and not the conditional so much. They use LBR to do basic block
profiling.

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-17 21:39                     ` Stephane Eranian
@ 2013-05-17 22:14                       ` Michael Neuling
  -1 siblings, 0 replies; 56+ messages in thread
From: Michael Neuling @ 2013-05-17 22:14 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Peter Zijlstra, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

Stephane Eranian <eranian@google.com> wrote:

> On Fri, May 17, 2013 at 1:39 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Fri, May 17, 2013 at 09:32:08PM +1000, Michael Neuling wrote:
> >> Peter Zijlstra <peterz@infradead.org> wrote:
> >
> >> > Wouldn't it be mostly conditional branches that are the primary control flow
> >> > and can get predicted wrong? I mean, I'm sure someone will miss-predict an
> >> > unconditional branch but its not like we care about people with such
> >> > afflictions do we?
> >>
> >> You could mispredict the target address of a computed goto.  You'd know
> >> it was taken but not know target address until later in the pipeline.
> >
> > Oh right, computed targets could indeed be mis predicted. I was more thinking
> > about jumps with immediate values.
> >
> >> On this, the POWER8 branch history buffer tells us two things about the
> >> prediction status.
> >>   1) if the branch was predicted taken/not taken correctly
> >>   2) if the target address was predicted correctly or not (for computed
> >>      gotos only)
> >> So we'd actually like more prediction bits too :-D
> >
> > So if I understand this right, 1) maps to the predicted flags we have; 2)
> > would be new stuff?
> >
> > We don't really have anything like that on x86, but I suppose if you make the
> > thing optional and present a 'useful' use-case implemented in userspace code
> > we could take it :-)
> >
> >> > Anyway, since PPC people thought it worth baking into hardware,
> >> > presumably they have a compelling use case. Mikey could you see if you
> >> > can retrieve that from someone in the know? It might be interesting.
> >>
> >> I don't think we can mispredict a non-conditional non-computed but I'll
> >> have to check with the HW folks.
> >
> > I was mostly wondering about the use-case for the conditional filter. Stephane
> > didn't think it useful, clearly your hardware guys thought different :-)
> 
> From my experience talking with compiler people, they care about ALL
> the branches and not the conditional so much. They use LBR to do basic
> block profiling.

OK.  I don't have a good handle on what's useful for compilers or JITs
right now.  I'm just plumbing through what's possible.

Mikey


^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-17 22:14                       ` Michael Neuling
@ 2013-05-17 22:59                         ` Stephane Eranian
  -1 siblings, 0 replies; 56+ messages in thread
From: Stephane Eranian @ 2013-05-17 22:59 UTC (permalink / raw)
  To: Michael Neuling
  Cc: Peter Zijlstra, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

On Sat, May 18, 2013 at 12:14 AM, Michael Neuling <mikey@neuling.org> wrote:
> Stephane Eranian <eranian@google.com> wrote:
>
>> On Fri, May 17, 2013 at 1:39 PM, Peter Zijlstra <peterz@infradead.org> wrote:
>> > On Fri, May 17, 2013 at 09:32:08PM +1000, Michael Neuling wrote:
>> >> Peter Zijlstra <peterz@infradead.org> wrote:
>> >
>> >> > Wouldn't it be mostly conditional branches that are the primary control flow
>> >> > and can get predicted wrong? I mean, I'm sure someone will miss-predict an
>> >> > unconditional branch but its not like we care about people with such
>> >> > afflictions do we?
>> >>
>> >> You could mispredict the target address of a computed goto.  You'd know
>> >> it was taken but not know target address until later in the pipeline.
>> >
>> > Oh right, computed targets could indeed be mis predicted. I was more thinking
>> > about jumps with immediate values.
>> >
>> >> On this, the POWER8 branch history buffer tells us two things about the
>> >> prediction status.
>> >>   1) if the branch was predicted taken/not taken correctly
>> >>   2) if the target address was predicted correctly or not (for computed
>> >>      gotos only)
>> >> So we'd actually like more prediction bits too :-D
>> >
>> > So if I understand this right, 1) maps to the predicted flags we have; 2)
>> > would be new stuff?
>> >
>> > We don't really have anything like that on x86, but I suppose if you make the
>> > thing optional and present a 'useful' use-case implemented in userspace code
>> > we could take it :-)
>> >
>> >> > Anyway, since PPC people thought it worth baking into hardware,
>> >> > presumably they have a compelling use case. Mikey could you see if you
>> >> > can retrieve that from someone in the know? It might be interesting.
>> >>
>> >> I don't think we can mispredict a non-conditional non-computed but I'll
>> >> have to check with the HW folks.
>> >
>> > I was mostly wondering about the use-case for the conditional filter. Stephane
>> > didn't think it useful, clearly your hardware guys thought different :-)
>>
>> From my experience talking with compiler people, they care about ALL
>> the branches and not the conditional so much. They use LBR to do basic
>> block profiling.
>
> OK.  I don't have a good handle on what's useful for compilers or JITs
> right now.  I'm just plumbing through what's possible.
>
I understand. It is okay to extend the interface.

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-17 11:12               ` Peter Zijlstra
@ 2013-05-21  5:41                 ` Michael Neuling
  -1 siblings, 0 replies; 56+ messages in thread
From: Michael Neuling @ 2013-05-21  5:41 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Anshuman Khandual, Stephane Eranian, Ingo Molnar, LKML, ak,
	Michael Ellerman, benh, Linux PPC dev

Peter Zijlstra <peterz@infradead.org> wrote:

> On Thu, May 16, 2013 at 05:36:11PM +0200, Stephane Eranian wrote:
> > On Thu, May 16, 2013 at 1:16 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > > On Thu, May 16, 2013 at 08:15:17PM +1000, Michael Neuling wrote:
> > >> Peter,
> > >>
> > >> BTW PowerPC also has the ability to filter on conditional branches.  Any
> > >> chance we could add something like the follow to perf also?
> > >>
> > >
> > > I don't see an immediate problem with that except that we on x86 need to
> > > implement that in the software filter. Stephane do you see any
> > > fundamental issue with that?
> > >
> > On X86, the LBR cannot filter on conditional in HW. Thus as Peter said, it would
> > have to be done in SW. I did not add that because I think those branches are
> > not necessarily useful for tools.
> 
> Wouldn't it be mostly conditional branches that are the primary control flow
> and can get predicted wrong? I mean, I'm sure someone will miss-predict an
> unconditional branch but its not like we care about people with such
> afflictions do we?
> 
> Anyway, since PPC people thought it worth baking into hardware, presumably they
> have a compelling use case. Mikey could you see if you can retrieve that from
> someone in the know? It might be interesting.
> 
> Also, it looks like its trivial to add to x86, you seem to have already done
> all the hard work by having X86_BR_JCC.
> 
> The only missing piece would be:

Peter,

Can we add your signed-off-by on this?

We are cleaning up our series for conditional branches and would like to
add this as part of the post.

Mikey


> 
> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -337,6 +337,10 @@ static int intel_pmu_setup_sw_lbr_filter
>  
>  	if (br_type & PERF_SAMPLE_BRANCH_IND_CALL)
>  		mask |= X86_BR_IND_CALL;
> +
> +	if (br_type & PERF_SAMPLE_BRANCH_CONDITIONAL)
> +		mask |= X86_BR_JCC;
> +
>  	/*
>  	 * stash actual user request into reg, it may
>  	 * be used by fixup code for some CPU
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-21  5:41                 ` Michael Neuling
@ 2013-05-21  8:50                   ` Peter Zijlstra
  -1 siblings, 0 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-05-21  8:50 UTC (permalink / raw)
  To: Michael Neuling
  Cc: Anshuman Khandual, Stephane Eranian, Ingo Molnar, LKML, ak,
	Michael Ellerman, benh, Linux PPC dev

On Tue, May 21, 2013 at 03:41:35PM +1000, Michael Neuling wrote:
> Peter Zijlstra <peterz@infradead.org> wrote:
> 
> Can we add your signed-off-by on this?
> 
> We are cleaning up our series for conditional branches and would like to
> add this as part of the post.

Sure, but it's completely untested... I was hoping Stephane would say
something about it since he wrote all that magic ;-)

But yeah, feel free to add my SoB.

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-21  8:50                   ` Peter Zijlstra
@ 2013-05-21 13:46                     ` Stephane Eranian
  -1 siblings, 0 replies; 56+ messages in thread
From: Stephane Eranian @ 2013-05-21 13:46 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael Neuling, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

On Tue, May 21, 2013 at 10:50 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, May 21, 2013 at 03:41:35PM +1000, Michael Neuling wrote:
>> Peter Zijlstra <peterz@infradead.org> wrote:
>>
>> Can we add your signed-off-by on this?
>>
>> We are cleaning up our series for conditional branches and would like to
>> add this as part of the post.
>
> Sure, but it's completely untested... I was hoping Stephane would say
> something about it since he wrote all that magic ;-)
>
Let me take a look at it.

> But yeah, feel free to add my SoB.

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-16 10:15         ` Michael Neuling
@ 2013-05-21 13:55           ` Stephane Eranian
  -1 siblings, 0 replies; 56+ messages in thread
From: Stephane Eranian @ 2013-05-21 13:55 UTC (permalink / raw)
  To: Michael Neuling
  Cc: Peter Zijlstra, Ingo Molnar, LKML, ak, Michael Ellerman, benh,
	Linux PPC dev

On Thu, May 16, 2013 at 12:15 PM, Michael Neuling <mikey@neuling.org> wrote:
> Peter Zijlstra <peterz@infradead.org> wrote:
>
>> On Wed, May 15, 2013 at 03:37:22PM +0200, Stephane Eranian wrote:
>> > On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
>> > > We should always have proper privileges when requesting kernel data.
>> > >
>> > > Cc: Andi Kleen <ak@linux.intel.com>
>> > > Cc: eranian@google.com
>> > > Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
>> > > Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
>> > > ---
>> > >  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
>> > >  1 file changed, 4 insertions(+), 1 deletion(-)
>> > >
>> > > --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
>> > > +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
>> > > @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
>> > >         if (br_type & PERF_SAMPLE_BRANCH_USER)
>> > >                 mask |= X86_BR_USER;
>> > >
>> > > -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
>> > > +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
>> > > +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
>> > > +                       return -EACCES;
>> > >                 mask |= X86_BR_KERNEL;
>> > > +       }
>> > >
>> > This will prevent regular users from capturing kernel -> kernel branches.
>> > But it won't prevent users from getting kernel -> user branches. Thus
>> > some kernel address will still be captured. I guess they could be eliminated
>> > by the sw_filter.
>> >
>> > When using LBR priv level filtering, the filter applies to the branch target
>> > only.
>>
>> How about something like the below? It also adds the branch flags
>> Mikey wanted for PowerPC.
>
> Peter,
>
> BTW PowerPC also has the ability to filter on conditional branches.  Any
> chance we could add something like the follow to perf also?
>
> Mikey
>
> diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
> index fb104e5..891c769 100644
> --- a/include/uapi/linux/perf_event.h
> +++ b/include/uapi/linux/perf_event.h
> @@ -157,8 +157,9 @@ enum perf_branch_sample_type {
>         PERF_SAMPLE_BRANCH_ANY_CALL     = 1U << 4, /* any call branch */
>         PERF_SAMPLE_BRANCH_ANY_RETURN   = 1U << 5, /* any return branch */
>         PERF_SAMPLE_BRANCH_IND_CALL     = 1U << 6, /* indirect calls */
> +       PERF_SAMPLE_BRANCH_CONDITIONAL  = 1U << 7, /* conditional branches */
>
I would use PERF_SAMPLE_BRANCH_COND here.

> -       PERF_SAMPLE_BRANCH_MAX          = 1U << 7, /* non-ABI */
> +       PERF_SAMPLE_BRANCH_MAX          = 1U << 8, /* non-ABI */
>  };
>
>  #define PERF_SAMPLE_BRANCH_PLM_ALL \
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index cdf58ec..5b0b89d 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -676,6 +676,7 @@ static const struct branch_mode branch_modes[] = {
>         BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
>         BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
>         BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
> +       BRANCH_OPT("cnd", PERF_SAMPLE_BRANCH_CONDITIONAL),

use "cond"

>         BRANCH_END
>  };
>

And if you do this, you also need to update the x86 perf_event_intel_lbr.c
mapping tables to fill out the entries for PERF_SAMPLE_BRANCH_COND:

        [PERF_SAMPLE_BRANCH_COND]       = LBR_JCC,

And you also need to update intel_pmu_setup_sw_lbr_filter()
to handle the conversion to x86 instructions:

       if (br_type & PERF_SAMPLE_BRANCH_COND)
                mask |= X86_BR_JCC;


You also need to update the perf-record.txt documentation to list cond as a
possible branch filter.

^ permalink raw reply	[flat|nested] 56+ messages in thread


* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-21 13:55           ` Stephane Eranian
  (?)
@ 2013-05-22  6:43           ` Anshuman Khandual
  2013-05-22 12:23             ` Stephane Eranian
  -1 siblings, 1 reply; 56+ messages in thread
From: Anshuman Khandual @ 2013-05-22  6:43 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Michael Neuling, ak, Peter Zijlstra, LKML, Linux PPC dev, Ingo Molnar

On 05/21/2013 07:25 PM, Stephane Eranian wrote:
> On Thu, May 16, 2013 at 12:15 PM, Michael Neuling <mikey@neuling.org> wrote:
>> Peter Zijlstra <peterz@infradead.org> wrote:
>>
>>> On Wed, May 15, 2013 at 03:37:22PM +0200, Stephane Eranian wrote:
>>>> On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
>>>>> We should always have proper privileges when requesting kernel data.
>>>>>
>>>>> Cc: Andi Kleen <ak@linux.intel.com>
>>>>> Cc: eranian@google.com
>>>>> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
>>>>> Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
>>>>> ---
>>>>>  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
>>>>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>>>>
>>>>> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
>>>>> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
>>>>> @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
>>>>>         if (br_type & PERF_SAMPLE_BRANCH_USER)
>>>>>                 mask |= X86_BR_USER;
>>>>>
>>>>> -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
>>>>> +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
>>>>> +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
>>>>> +                       return -EACCES;
>>>>>                 mask |= X86_BR_KERNEL;
>>>>> +       }
>>>>>
>>>> This will prevent regular users from capturing kernel -> kernel branches.
>>>> But it won't prevent users from getting kernel -> user branches. Thus
>>>> some kernel address will still be captured. I guess they could be eliminated
>>>> by the sw_filter.
>>>>
>>>> When using LBR priv level filtering, the filter applies to the branch target
>>>> only.
>>>
>>> How about something like the below? It also adds the branch flags
>>> Mikey wanted for PowerPC.
>>
>> Peter,
>>
>> BTW PowerPC also has the ability to filter on conditional branches.  Any
>> chance we could add something like the follow to perf also?
>>
>> Mikey
>>
>> diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
>> index fb104e5..891c769 100644
>> --- a/include/uapi/linux/perf_event.h
>> +++ b/include/uapi/linux/perf_event.h
>> @@ -157,8 +157,9 @@ enum perf_branch_sample_type {
>>         PERF_SAMPLE_BRANCH_ANY_CALL     = 1U << 4, /* any call branch */
>>         PERF_SAMPLE_BRANCH_ANY_RETURN   = 1U << 5, /* any return branch */
>>         PERF_SAMPLE_BRANCH_IND_CALL     = 1U << 6, /* indirect calls */
>> +       PERF_SAMPLE_BRANCH_CONDITIONAL  = 1U << 7, /* conditional branches */
>>
> I would use PERF_SAMPLE_BRANCH_COND here.
> 
>> -       PERF_SAMPLE_BRANCH_MAX          = 1U << 7, /* non-ABI */
>> +       PERF_SAMPLE_BRANCH_MAX          = 1U << 8, /* non-ABI */
>>  };
>>
>>  #define PERF_SAMPLE_BRANCH_PLM_ALL \
>> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
>> index cdf58ec..5b0b89d 100644
>> --- a/tools/perf/builtin-record.c
>> +++ b/tools/perf/builtin-record.c
>> @@ -676,6 +676,7 @@ static const struct branch_mode branch_modes[] = {
>>         BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
>>         BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
>>         BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
>> +       BRANCH_OPT("cnd", PERF_SAMPLE_BRANCH_CONDITIONAL),
> 
> use "cond"
> 
>>         BRANCH_END
>>  };
>>
> 
> And if you do this, you also need to update the x86
> perf_event_intel_lbr.c mapping
> tables to fill out the entries for PERF_SAMPLE_BRANCH_COND:
> 
>         [PERF_SAMPLE_BRANCH_COND]       = LBR_JCC,
> 
> And you also need to update intel_pmu_setup_sw_lbr_filter()
> to handle the conversion to x86 instructions:
> 
>        if (br_type & PERF_SAMPLE_BRANCH_COND)
>                 mask |= X86_BR_JCC;
> 
> 
> You also need to update the perf-record.txt documentation to list cond
> as a possible
> branch filter.

Hey Stephane,

I have incorporated all the review comments into the patch series
https://lkml.org/lkml/2013/5/22/51.

Regards
Anshuman


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-22  6:43           ` Anshuman Khandual
@ 2013-05-22 12:23             ` Stephane Eranian
  2013-05-22 14:51               ` Anshuman Khandual
  0 siblings, 1 reply; 56+ messages in thread
From: Stephane Eranian @ 2013-05-22 12:23 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Michael Neuling, ak, Peter Zijlstra, LKML, Linux PPC dev, Ingo Molnar

Hi,



On Wed, May 22, 2013 at 8:43 AM, Anshuman Khandual
<khandual@linux.vnet.ibm.com> wrote:
> On 05/21/2013 07:25 PM, Stephane Eranian wrote:
>> On Thu, May 16, 2013 at 12:15 PM, Michael Neuling <mikey@neuling.org> wrote:
>>> Peter Zijlstra <peterz@infradead.org> wrote:
>>>
>>>> On Wed, May 15, 2013 at 03:37:22PM +0200, Stephane Eranian wrote:
>>>>> On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
>>>>>> We should always have proper privileges when requesting kernel data.
>>>>>>
>>>>>> Cc: Andi Kleen <ak@linux.intel.com>
>>>>>> Cc: eranian@google.com
>>>>>> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
>>>>>> Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
>>>>>> ---
>>>>>>  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
>>>>>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
>>>>>> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
>>>>>> @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
>>>>>>         if (br_type & PERF_SAMPLE_BRANCH_USER)
>>>>>>                 mask |= X86_BR_USER;
>>>>>>
>>>>>> -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
>>>>>> +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
>>>>>> +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
>>>>>> +                       return -EACCES;
>>>>>>                 mask |= X86_BR_KERNEL;
>>>>>> +       }
>>>>>>
>>>>> This will prevent regular users from capturing kernel -> kernel branches.
>>>>> But it won't prevent users from getting kernel -> user branches. Thus
>>>>> some kernel address will still be captured. I guess they could be eliminated
>>>>> by the sw_filter.
>>>>>
>>>>> When using LBR priv level filtering, the filter applies to the branch target
>>>>> only.
>>>>
>>>> How about something like the below? It also adds the branch flags
>>>> Mikey wanted for PowerPC.
>>>
>>> Peter,
>>>
>>> BTW PowerPC also has the ability to filter on conditional branches.  Any
>>> chance we could add something like the following to perf also?
>>>
>>> Mikey
>>>
>>> diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
>>> index fb104e5..891c769 100644
>>> --- a/include/uapi/linux/perf_event.h
>>> +++ b/include/uapi/linux/perf_event.h
>>> @@ -157,8 +157,9 @@ enum perf_branch_sample_type {
>>>         PERF_SAMPLE_BRANCH_ANY_CALL     = 1U << 4, /* any call branch */
>>>         PERF_SAMPLE_BRANCH_ANY_RETURN   = 1U << 5, /* any return branch */
>>>         PERF_SAMPLE_BRANCH_IND_CALL     = 1U << 6, /* indirect calls */
>>> +       PERF_SAMPLE_BRANCH_CONDITIONAL  = 1U << 7, /* conditional branches */
>>>
>> I would use PERF_SAMPLE_BRANCH_COND here.
>>
>>> -       PERF_SAMPLE_BRANCH_MAX          = 1U << 7, /* non-ABI */
>>> +       PERF_SAMPLE_BRANCH_MAX          = 1U << 8, /* non-ABI */
>>>  };
>>>
>>>  #define PERF_SAMPLE_BRANCH_PLM_ALL \
>>> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
>>> index cdf58ec..5b0b89d 100644
>>> --- a/tools/perf/builtin-record.c
>>> +++ b/tools/perf/builtin-record.c
>>> @@ -676,6 +676,7 @@ static const struct branch_mode branch_modes[] = {
>>>         BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
>>>         BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
>>>         BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
>>> +       BRANCH_OPT("cnd", PERF_SAMPLE_BRANCH_CONDITIONAL),
>>
>> use "cond"
>>
>>>         BRANCH_END
>>>  };
>>>
>>
>> And if you do this, you also need to update the x86
>> perf_event_intel_lbr.c mapping
>> tables to fill out the entries for PERF_SAMPLE_BRANCH_COND:
>>
>>         [PERF_SAMPLE_BRANCH_COND]       = LBR_JCC,
>>
>> And you also need to update intel_pmu_setup_sw_lbr_filter()
>> to handle the conversion to x86 instructions:
>>
>>        if (br_type & PERF_SAMPLE_BRANCH_COND)
>>                 mask |= X86_BR_JCC;
>>
>>
>> You also need to update the perf-record.txt documentation to list cond
>> as a possible
>> branch filter.
>
> Hey Stephane,
>
> I have incorporated all the review comments into the patch series
> https://lkml.org/lkml/2013/5/22/51.
>
I don't see how you can compile Patch 3/5:

+ BRANCH_OPT("cond", PERF_SAMPLE_BRANCH_CONDITIONAL),

Needs to be PERF_SAMPLE_BRANCH_COND.

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL
  2013-05-22 12:23             ` Stephane Eranian
@ 2013-05-22 14:51               ` Anshuman Khandual
  0 siblings, 0 replies; 56+ messages in thread
From: Anshuman Khandual @ 2013-05-22 14:51 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Michael Neuling, ak, Peter Zijlstra, LKML, Linux PPC dev, Ingo Molnar

On 05/22/2013 05:53 PM, Stephane Eranian wrote:
> Hi,
> 
> 
> 
> On Wed, May 22, 2013 at 8:43 AM, Anshuman Khandual
> <khandual@linux.vnet.ibm.com> wrote:
>> On 05/21/2013 07:25 PM, Stephane Eranian wrote:
>>> On Thu, May 16, 2013 at 12:15 PM, Michael Neuling <mikey@neuling.org> wrote:
>>>> Peter Zijlstra <peterz@infradead.org> wrote:
>>>>
>>>>> On Wed, May 15, 2013 at 03:37:22PM +0200, Stephane Eranian wrote:
>>>>>> On Fri, May 3, 2013 at 2:11 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
>>>>>>> We should always have proper privileges when requesting kernel data.
>>>>>>>
>>>>>>> Cc: Andi Kleen <ak@linux.intel.com>
>>>>>>> Cc: eranian@google.com
>>>>>>> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
>>>>>>> Link: http://lkml.kernel.org/n/tip-v0x9ky3ahzr6nm3c6ilwrili@git.kernel.org
>>>>>>> ---
>>>>>>>  arch/x86/kernel/cpu/perf_event_intel_lbr.c |    5 ++++-
>>>>>>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
>>>>>>> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
>>>>>>> @@ -318,8 +318,11 @@ static void intel_pmu_setup_sw_lbr_filte
>>>>>>>         if (br_type & PERF_SAMPLE_BRANCH_USER)
>>>>>>>                 mask |= X86_BR_USER;
>>>>>>>
>>>>>>> -       if (br_type & PERF_SAMPLE_BRANCH_KERNEL)
>>>>>>> +       if (br_type & PERF_SAMPLE_BRANCH_KERNEL) {
>>>>>>> +               if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
>>>>>>> +                       return -EACCES;
>>>>>>>                 mask |= X86_BR_KERNEL;
>>>>>>> +       }
>>>>>>>
>>>>>> This will prevent regular users from capturing kernel -> kernel branches.
>>>>>> But it won't prevent users from getting kernel -> user branches. Thus
>>>>>> some kernel addresses will still be captured. I guess they could be eliminated
>>>>>> by the sw_filter.
>>>>>>
>>>>>> When using LBR priv level filtering, the filter applies to the branch target
>>>>>> only.
>>>>>
>>>>> How about something like the below? It also adds the branch flags
>>>>> Mikey wanted for PowerPC.
>>>>
>>>> Peter,
>>>>
>>>> BTW PowerPC also has the ability to filter on conditional branches.  Any
>>>> chance we could add something like the following to perf also?
>>>>
>>>> Mikey
>>>>
>>>> diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
>>>> index fb104e5..891c769 100644
>>>> --- a/include/uapi/linux/perf_event.h
>>>> +++ b/include/uapi/linux/perf_event.h
>>>> @@ -157,8 +157,9 @@ enum perf_branch_sample_type {
>>>>         PERF_SAMPLE_BRANCH_ANY_CALL     = 1U << 4, /* any call branch */
>>>>         PERF_SAMPLE_BRANCH_ANY_RETURN   = 1U << 5, /* any return branch */
>>>>         PERF_SAMPLE_BRANCH_IND_CALL     = 1U << 6, /* indirect calls */
>>>> +       PERF_SAMPLE_BRANCH_CONDITIONAL  = 1U << 7, /* conditional branches */
>>>>
>>> I would use PERF_SAMPLE_BRANCH_COND here.
>>>
>>>> -       PERF_SAMPLE_BRANCH_MAX          = 1U << 7, /* non-ABI */
>>>> +       PERF_SAMPLE_BRANCH_MAX          = 1U << 8, /* non-ABI */
>>>>  };
>>>>
>>>>  #define PERF_SAMPLE_BRANCH_PLM_ALL \
>>>> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
>>>> index cdf58ec..5b0b89d 100644
>>>> --- a/tools/perf/builtin-record.c
>>>> +++ b/tools/perf/builtin-record.c
>>>> @@ -676,6 +676,7 @@ static const struct branch_mode branch_modes[] = {
>>>>         BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
>>>>         BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
>>>>         BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
>>>> +       BRANCH_OPT("cnd", PERF_SAMPLE_BRANCH_CONDITIONAL),
>>>
>>> use "cond"
>>>
>>>>         BRANCH_END
>>>>  };
>>>>
>>>
>>> And if you do this, you also need to update the x86
>>> perf_event_intel_lbr.c mapping
>>> tables to fill out the entries for PERF_SAMPLE_BRANCH_COND:
>>>
>>>         [PERF_SAMPLE_BRANCH_COND]       = LBR_JCC,
>>>
>>> And you also need to update intel_pmu_setup_sw_lbr_filter()
>>> to handle the conversion to x86 instructions:
>>>
>>>        if (br_type & PERF_SAMPLE_BRANCH_COND)
>>>                 mask |= X86_BR_JCC;
>>>
>>>
>>> You also need to update the perf-record.txt documentation to list cond
>>> as a possible
>>> branch filter.
>>
>> Hey Stephane,
>>
>> I have incorporated all the review comments into the patch series
>> https://lkml.org/lkml/2013/5/22/51.
>>
> I don't see how you can compile Patch 3/5:
> 
> + BRANCH_OPT("cond", PERF_SAMPLE_BRANCH_CONDITIONAL),
> 
> Needs to be PERF_SAMPLE_BRANCH_COND.
> 

Ahh, sorry missed it, will fix it.



end of thread, other threads:[~2013-05-22 14:52 UTC | newest]

Thread overview: 56+ messages (download: mbox.gz / follow: Atom feed)
2013-05-03 12:11 [PATCH 0/3] Various perf patches Peter Zijlstra
2013-05-03 12:11 ` [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB Peter Zijlstra
2013-05-03 14:35   ` Andi Kleen
2013-05-03 17:00     ` Peter Zijlstra
2013-05-15 14:20       ` Stephane Eranian
2013-05-15 16:51         ` Peter Zijlstra
2013-05-16 15:42           ` Stephane Eranian
2013-05-16 16:07             ` Andi Kleen
2013-05-16 16:26               ` Stephane Eranian
2013-05-04  8:20   ` [tip:perf/urgent] perf/x86: Blacklist all MEM_*_RETIRED events for Ivy Bridge tip-bot for Peter Zijlstra
2013-05-03 12:11 ` [PATCH 2/3] perf, x86, lbr: Fix LBR filter Peter Zijlstra
2013-05-03 14:34   ` Andi Kleen
2013-05-04  6:34     ` Ingo Molnar
2013-05-04  8:21   ` [tip:perf/urgent] perf/x86/intel/lbr: " tip-bot for Peter Zijlstra
2013-05-03 12:11 ` [PATCH 3/3] perf, x86, lbr: Demand proper privileges for PERF_SAMPLE_BRANCH_KERNEL Peter Zijlstra
2013-05-03 14:41   ` Andi Kleen
2013-05-04  8:22   ` [tip:perf/urgent] perf/x86/intel/lbr: " tip-bot for Peter Zijlstra
2013-05-04 11:19     ` Borislav Petkov
2013-05-05  9:05       ` Ingo Molnar
2013-05-06  8:07       ` Peter Zijlstra
2013-05-06  9:42         ` Ingo Molnar
2013-05-15 13:37   ` [PATCH 3/3] perf, x86, lbr: " Stephane Eranian
2013-05-15 14:30     ` Peter Zijlstra
2013-05-16  9:09     ` Peter Zijlstra
2013-05-16  9:17       ` Peter Zijlstra
2013-05-16 10:09       ` Michael Neuling
2013-05-16 10:09         ` Michael Neuling
2013-05-16 10:15       ` Michael Neuling
2013-05-16 10:15         ` Michael Neuling
2013-05-16 11:16         ` Peter Zijlstra
2013-05-16 11:16           ` Peter Zijlstra
2013-05-16 15:36           ` Stephane Eranian
2013-05-16 15:36             ` Stephane Eranian
2013-05-17 11:12             ` Peter Zijlstra
2013-05-17 11:12               ` Peter Zijlstra
2013-05-17 11:32               ` Michael Neuling
2013-05-17 11:32                 ` Michael Neuling
2013-05-17 11:39                 ` Peter Zijlstra
2013-05-17 11:39                   ` Peter Zijlstra
2013-05-17 21:39                   ` Stephane Eranian
2013-05-17 21:39                     ` Stephane Eranian
2013-05-17 22:14                     ` Michael Neuling
2013-05-17 22:14                       ` Michael Neuling
2013-05-17 22:59                       ` Stephane Eranian
2013-05-17 22:59                         ` Stephane Eranian
2013-05-21  5:41               ` Michael Neuling
2013-05-21  5:41                 ` Michael Neuling
2013-05-21  8:50                 ` Peter Zijlstra
2013-05-21  8:50                   ` Peter Zijlstra
2013-05-21 13:46                   ` Stephane Eranian
2013-05-21 13:46                     ` Stephane Eranian
2013-05-21 13:55         ` Stephane Eranian
2013-05-21 13:55           ` Stephane Eranian
2013-05-22  6:43           ` Anshuman Khandual
2013-05-22 12:23             ` Stephane Eranian
2013-05-22 14:51               ` Anshuman Khandual
