* [PATCH 1/3] x86 msr: msr goto extension support
@ 2014-07-31  9:41 kan.liang
  2014-07-31  9:41 ` [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP kan.liang
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: kan.liang @ 2014-07-31  9:41 UTC (permalink / raw)
  To: peterz; +Cc: andi, alexander.shishkin, linux-kernel, Kan Liang

From: Kan Liang <kan.liang@intel.com>

Currently, {rd,wr}msrl_safe can handle the exception caused by accessing a
specific MSR. However, it introduces an extra conditional branch to test for
errors, which hurts the "fast" path's performance.
The newly implemented {rd,wr}msrl_goto macros not only handle the exception
caused by accessing a specific MSR, but also take advantage of the asm goto
extension to eliminate that performance impact.

The asm goto extension is supported by GCC 4.5 and later versions. If the
compiler doesn't support the goto extension, _safe is used in place of _goto.
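
For illustration, a minimal sketch of a caller (hypothetical function, not
part of this patch). On success the straight-line code contains no error
test; a faulting rdmsr is redirected to the fail label via the exception
table:

	static u64 read_msr_or_zero(unsigned int msr)
	{
		u64 val;

		rdmsrl_goto(msr, val, fail);	/* no test on the fast path */
		return val;

	fail:
		return 0;	/* reached only if the rdmsr faults (#GP) */
	}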

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 arch/x86/include/asm/msr.h      | 60 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/include/asm/paravirt.h | 18 +++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index de36f22..55438da 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -203,6 +203,66 @@ do {                                                            \
 
 #define rdtscpll(val, aux) (val) = native_read_tscp(&(aux))
 
+#ifdef CC_HAVE_ASM_GOTO
+
+/*
+ * The _goto version is rdmsrl/wrmsrl with exception handling.
+ * The advantage (over _safe) is that it can jump directly into the
+ * exception handling code, with no error test in the "fast" path.
+ *
+ * Since asm goto doesn't support outputs, protect the output
+ * registers with clobbers, and consume the registers immediately.
+ */
+#define rdmsrl_goto(msr, result, fail_label)			\
+do {								\
+	DECLARE_ARGS(val, low, high);				\
+	asm_volatile_goto("2: rdmsr\n"				\
+			"1:\n\t"				\
+			_ASM_EXTABLE(2b, %l[fail_label])	\
+			: /* No outputs. */			\
+			: "c" (msr)				\
+			: "%rax", "%rdx"			\
+			: fail_label);				\
+	asm volatile (""					\
+			: EAX_EDX_RET(val, low, high)		\
+			: );					\
+	result = EAX_EDX_VAL(val, low, high);			\
+} while (0)
+
+#define wrmsrl_goto(msr, val, fail_label)			\
+do {								\
+	unsigned low, high;					\
+	low = (u32)val;						\
+	high = (u32)(val >> 32);				\
+	asm_volatile_goto("2: wrmsr\n"				\
+			"1:\n\t"				\
+			_ASM_EXTABLE(2b, %l[fail_label])	\
+			: /* No outputs. */			\
+			: "c" (msr), "a" (low), "d" (high)	\
+			: "memory"				\
+			: fail_label);				\
+} while (0)
+
+#else /* CC_HAVE_ASM_GOTO */
+
+/*
+ * If the compiler doesn't support asm goto, fall back to _safe.
+ */
+#define rdmsrl_goto(msr, result, fail_label)			\
+do {								\
+	if (rdmsrl_safe(msr, &result))				\
+		goto fail_label;				\
+} while (0)
+
+#define wrmsrl_goto(msr, result, fail_label)			\
+do {								\
+	if (wrmsr_safe((msr), (u32)(result),			\
+				(u32)((result) >> 32)))		\
+		goto fail_label;				\
+} while (0)
+
+#endif  /* CC_HAVE_ASM_GOTO */
+
 #endif	/* !CONFIG_PARAVIRT */
 
 #define wrmsrl_safe(msr, val) wrmsr_safe((msr), (u32)(val),		\
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cd6e161..1fa18a1 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -174,6 +174,24 @@ static inline int rdmsrl_safe(unsigned msr, unsigned long long *p)
 	return err;
 }
 
+/*
+ * TODO: _goto is not supported for paravirt yet.
+ * The implementation below doesn't hurt current performance, though,
+ * since paravirt rdmsrl/wrmsrl eventually call the _safe versions anyway.
+ */
+#define rdmsrl_goto(msr, result, fail_label)			\
+do {								\
+	if (rdmsrl_safe(msr, &result))				\
+		goto fail_label;				\
+} while (0)
+
+#define wrmsrl_goto(msr, result, fail_label)			\
+do {								\
+	if (wrmsr_safe((msr), (u32)(result),			\
+				(u32)((result) >> 32)))		\
+		goto fail_label;				\
+} while (0)
+
 static inline u64 paravirt_read_tsc(void)
 {
 	return PVOP_CALL0(u64, pv_cpu_ops.read_tsc);
-- 
1.8.3.1



* [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP
  2014-07-31  9:41 [PATCH 1/3] x86 msr: msr goto extension support kan.liang
@ 2014-07-31  9:41 ` kan.liang
  2014-08-01  7:38   ` Peter Zijlstra
  2014-07-31  9:41 ` [PATCH 3/3] x86 perf: Protect LBR and BTS enabling kan.liang
  2014-07-31 21:05 ` [PATCH 1/3] x86 msr: msr goto extension support Andy Lutomirski
  2 siblings, 1 reply; 10+ messages in thread
From: kan.liang @ 2014-07-31  9:41 UTC (permalink / raw)
  To: peterz; +Cc: andi, alexander.shishkin, linux-kernel, Kan Liang

From: Kan Liang <kan.liang@intel.com>

Intel PT will take over the LBR hardware. If RTIT_CTL.TraceEn=1, any attempt
to read or write the LBR or LER MSRs, including LBR_TOS, will result in a #GP.
Since Intel PT can be enabled/disabled at runtime by hardware/BIOS, the LBR
MSRs should be protected at runtime as well.

The {rd,wr}msrl_goto macros can protect LBR accesses against the potential
#GP. Furthermore, they do not impact the "fast" path's performance.

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c | 35 ++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
index 9dd2459..ec82e0e 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -157,7 +157,11 @@ static void intel_pmu_lbr_reset_32(void)
 	int i;
 
 	for (i = 0; i < x86_pmu.lbr_nr; i++)
-		wrmsrl(x86_pmu.lbr_from + i, 0);
+		wrmsrl_goto(x86_pmu.lbr_from + i, 0ULL, wrmsr_fail);
+	return;
+
+wrmsr_fail:
+	; /* TODO: error path; do nothing for now. */
 }
 
 static void intel_pmu_lbr_reset_64(void)
@@ -165,9 +169,13 @@ static void intel_pmu_lbr_reset_64(void)
 	int i;
 
 	for (i = 0; i < x86_pmu.lbr_nr; i++) {
-		wrmsrl(x86_pmu.lbr_from + i, 0);
-		wrmsrl(x86_pmu.lbr_to   + i, 0);
+		wrmsrl_goto(x86_pmu.lbr_from + i, 0ULL, wrmsr_fail);
+		wrmsrl_goto(x86_pmu.lbr_to   + i, 0ULL, wrmsr_fail);
 	}
+	return;
+
+wrmsr_fail:
+	; /* TODO: error path; do nothing for now. */
 }
 
 void intel_pmu_lbr_reset(void)
@@ -241,9 +249,13 @@ static inline u64 intel_pmu_lbr_tos(void)
 {
 	u64 tos;
 
-	rdmsrl(x86_pmu.lbr_tos, tos);
+	rdmsrl_goto(x86_pmu.lbr_tos, tos, rdmsr_fail);
 
 	return tos;
+
+rdmsr_fail:
+	/* TODO: error path; just return 0 for now. */
+	return 0;
 }
 
 static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
@@ -262,7 +274,8 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
 			u64     lbr;
 		} msr_lastbranch;
 
-		rdmsrl(x86_pmu.lbr_from + lbr_idx, msr_lastbranch.lbr);
+		rdmsrl_goto(x86_pmu.lbr_from + lbr_idx,
+				msr_lastbranch.lbr, rdmsr_fail);
 
 		cpuc->lbr_entries[i].from	= msr_lastbranch.from;
 		cpuc->lbr_entries[i].to		= msr_lastbranch.to;
@@ -271,6 +284,10 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
 		cpuc->lbr_entries[i].reserved	= 0;
 	}
 	cpuc->lbr_stack.nr = i;
+	return;
+
+rdmsr_fail:
+	; /* TODO: error path; do nothing for now. */
 }
 
 /*
@@ -292,8 +309,8 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
 		int skip = 0;
 		int lbr_flags = lbr_desc[lbr_format];
 
-		rdmsrl(x86_pmu.lbr_from + lbr_idx, from);
-		rdmsrl(x86_pmu.lbr_to   + lbr_idx, to);
+		rdmsrl_goto(x86_pmu.lbr_from + lbr_idx, from, rdmsr_fail);
+		rdmsrl_goto(x86_pmu.lbr_to   + lbr_idx, to, rdmsr_fail);
 
 		if (lbr_flags & LBR_EIP_FLAGS) {
 			mis = !!(from & LBR_FROM_FLAG_MISPRED);
@@ -328,6 +345,10 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
 		out++;
 	}
 	cpuc->lbr_stack.nr = out;
+	return;
+
+rdmsr_fail:
+	; /* TODO: error path; do nothing for now. */
 }
 
 void intel_pmu_lbr_read(void)
-- 
1.8.3.1



* [PATCH 3/3] x86 perf: Protect LBR and BTS enabling
  2014-07-31  9:41 [PATCH 1/3] x86 msr: msr goto extension support kan.liang
  2014-07-31  9:41 ` [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP kan.liang
@ 2014-07-31  9:41 ` kan.liang
  2014-07-31 21:05 ` [PATCH 1/3] x86 msr: msr goto extension support Andy Lutomirski
  2 siblings, 0 replies; 10+ messages in thread
From: kan.liang @ 2014-07-31  9:41 UTC (permalink / raw)
  To: peterz; +Cc: andi, alexander.shishkin, linux-kernel, Kan Liang

From: Kan Liang <kan.liang@intel.com>

If packet generation is enabled (IA32_RTIT_CTL.TraceEn = 1), any attempt to
enable LBRs, LERs, BTS, or BTM (setting IA32_DEBUG_CTL.LBR = 1 or
IA32_DEBUG_CTL.TR = 1) will cause a general-protection fault (#GP).

{rd,wr}msrl_safe can handle the exception caused by enabling LBR/BTS.

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel_ds.c  | 4 ++--
 arch/x86/kernel/cpu/perf_event_intel_lbr.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 980970c..c2519b0 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -468,7 +468,7 @@ void intel_pmu_enable_bts(u64 config)
 	if (!(config & ARCH_PERFMON_EVENTSEL_USR))
 		debugctlmsr |= DEBUGCTLMSR_BTS_OFF_USR;
 
-	update_debugctlmsr(debugctlmsr);
+	wrmsrl_safe(MSR_IA32_DEBUGCTLMSR, debugctlmsr);
 }
 
 void intel_pmu_disable_bts(void)
@@ -485,7 +485,7 @@ void intel_pmu_disable_bts(void)
 		~(DEBUGCTLMSR_TR | DEBUGCTLMSR_BTS | DEBUGCTLMSR_BTINT |
 		  DEBUGCTLMSR_BTS_OFF_OS | DEBUGCTLMSR_BTS_OFF_USR);
 
-	update_debugctlmsr(debugctlmsr);
+	wrmsrl_safe(MSR_IA32_DEBUGCTLMSR, debugctlmsr);
 }
 
 int intel_pmu_drain_bts_buffer(void)
diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
index ec82e0e..688ac83 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -140,7 +140,7 @@ static void __intel_pmu_lbr_enable(void)
 
 	rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
 	debugctl |= (DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
-	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
+	wrmsrl_safe(MSR_IA32_DEBUGCTLMSR, debugctl);
 }
 
 static void __intel_pmu_lbr_disable(void)
@@ -149,7 +149,7 @@ static void __intel_pmu_lbr_disable(void)
 
 	rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
 	debugctl &= ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
-	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
+	wrmsrl_safe(MSR_IA32_DEBUGCTLMSR, debugctl);
 }
 
 static void intel_pmu_lbr_reset_32(void)
-- 
1.8.3.1



* Re: [PATCH 1/3] x86 msr: msr goto extension support
  2014-07-31  9:41 [PATCH 1/3] x86 msr: msr goto extension support kan.liang
  2014-07-31  9:41 ` [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP kan.liang
  2014-07-31  9:41 ` [PATCH 3/3] x86 perf: Protect LBR and BTS enabling kan.liang
@ 2014-07-31 21:05 ` Andy Lutomirski
  2014-08-01  8:05   ` Peter Zijlstra
  2 siblings, 1 reply; 10+ messages in thread
From: Andy Lutomirski @ 2014-07-31 21:05 UTC (permalink / raw)
  To: kan.liang, peterz; +Cc: andi, alexander.shishkin, linux-kernel

On 07/31/2014 02:41 AM, kan.liang@intel.com wrote:
> From: Kan Liang <kan.liang@intel.com>
> 
> Currently, {rd,wr}msrl_safe can handle the exception caused by accessing a
> specific MSR. However, it introduces an extra conditional branch to test for
> errors, which hurts the "fast" path's performance.
> The newly implemented {rd,wr}msrl_goto macros not only handle the exception
> caused by accessing a specific MSR, but also take advantage of the asm goto
> extension to eliminate that performance impact.
> 
> The asm goto extension is supported by GCC 4.5 and later versions. If the
> compiler doesn't support the goto extension, _safe is used in place of _goto.
> 
> Signed-off-by: Kan Liang <kan.liang@intel.com>
> ---
>  arch/x86/include/asm/msr.h      | 60 +++++++++++++++++++++++++++++++++++++++++
>  arch/x86/include/asm/paravirt.h | 18 +++++++++++++
>  2 files changed, 78 insertions(+)
> 
> diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
> index de36f22..55438da 100644
> --- a/arch/x86/include/asm/msr.h
> +++ b/arch/x86/include/asm/msr.h
> @@ -203,6 +203,66 @@ do {                                                            \
>  
>  #define rdtscpll(val, aux) (val) = native_read_tscp(&(aux))
>  
> +#ifdef CC_HAVE_ASM_GOTO
> +
> +/*
> + * The _goto version is rdmsrl/wrmsrl with exception handling.
> + * The advantage (over _safe) is that it can jump directly into the
> + * exception handling code, with no error test in the "fast" path.
> + *
> + * Since asm goto doesn't support outputs, protect the output
> + * registers with clobbers, and consume the registers immediately.
> + */
> +#define rdmsrl_goto(msr, result, fail_label)			\
> +do {								\
> +	DECLARE_ARGS(val, low, high);				\
> +	asm_volatile_goto("2: rdmsr\n"				\
> +			"1:\n\t"				\
> +			_ASM_EXTABLE(2b, %l[fail_label])	\
> +			: /* No outputs. */			\
> +			: "c" (msr)				\
> +			: "%rax", "%rdx"			\
> +			: fail_label);				\
> +	asm volatile (""					\
> +			: EAX_EDX_RET(val, low, high)		\
> +			: );					\

This is scary -- the compiler is free to optimize this incorrectly, and
it doesn't even seem very farfetched to me.
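
To make that concrete (an illustrative expansion, not actual compiler
output): the two asm statements carry no dependency the compiler can see,
so it is free to schedule unrelated code between them that reuses
%rax/%rdx before the empty asm claims them as outputs:

	asm_volatile_goto("2: rdmsr\n" ...
			: : "c" (msr) : "%rax", "%rdx" : fail_label);
	/* gcc may place surrounding code here, e.g. an unrelated
	   "mov $1, %eax", destroying the rdmsr result */
	asm volatile ("" : EAX_EDX_RET(val, low, high) : );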

> +	result = EAX_EDX_VAL(val, low, high);			\
> +} while (0)
> +
> +#define wrmsrl_goto(msr, val, fail_label)			\
> +do {								\
> +	unsigned low, high;					\
> +	low = (u32)val;						\
> +	high = (u32)(val >> 32);				\
> +	asm_volatile_goto("2: wrmsr\n"				\
> +			"1:\n\t"				\
> +			_ASM_EXTABLE(2b, %l[fail_label])	\
> +			: /* No outputs. */			\
> +			: "c" (msr), "a" (low), "d" (high)	\
> +			: "memory"				\
> +			: fail_label);				\
> +} while (0)

I like this one.

--Andy


* Re: [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP
  2014-07-31  9:41 ` [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP kan.liang
@ 2014-08-01  7:38   ` Peter Zijlstra
  2014-08-01  7:44     ` Peter Zijlstra
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2014-08-01  7:38 UTC (permalink / raw)
  To: kan.liang; +Cc: andi, alexander.shishkin, linux-kernel

On Thu, Jul 31, 2014 at 02:41:02AM -0700, kan.liang@intel.com wrote:
> From: Kan Liang <kan.liang@intel.com>
> 
> Intel PT will take over the LBR hardware. If RTIT_CTL.TraceEn=1, any attempt
> to read or write the LBR or LER MSRs, including LBR_TOS, will result in a #GP.
> Since Intel PT can be enabled/disabled at runtime by hardware/BIOS, the LBR
> MSRs should be protected at runtime as well.
> 
> The {rd,wr}msrl_goto macros can protect LBR accesses against the potential
> #GP. Furthermore, they do not impact the "fast" path's performance.

NAK!

I already said this isn't going to ever happen.

Both PT and LBR are arbitrated through the kernel, therefore we can (and
must) deny PT when there's existing LBR usage and vice versa.
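
(For the record, a hedged sketch of the sort of arbitration meant here --
hypothetical helpers, not an existing kernel API:

	static DEFINE_MUTEX(arb_lock);
	static int lbr_users, pt_users;

	static int reserve_lbr(void)
	{
		int ret = 0;

		mutex_lock(&arb_lock);
		if (pt_users)
			ret = -EBUSY;	/* deny LBR while PT is in use */
		else
			lbr_users++;
		mutex_unlock(&arb_lock);
		return ret;
	}

with a symmetric reserve_pt() that fails while lbr_users is non-zero, and
matching release helpers that decrement the counts.)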

We will not hijack resources like this full stop end of story.

Fuck hardware/BIOS, they should _NOT_ be touching this.

The 3 people in the world with access to an x86 hardware debugger had
better be competent and know WTF they're doing and the BIOS can just
piss off right now, they should not be touching this _EVER_.




* Re: [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP
  2014-08-01  7:38   ` Peter Zijlstra
@ 2014-08-01  7:44     ` Peter Zijlstra
  2014-08-01 13:21       ` Andi Kleen
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2014-08-01  7:44 UTC (permalink / raw)
  To: kan.liang
  Cc: andi, alexander.shishkin, linux-kernel, Ingo Molnar, Thomas Gleixner

On Fri, Aug 01, 2014 at 09:38:25AM +0200, Peter Zijlstra wrote:
> On Thu, Jul 31, 2014 at 02:41:02AM -0700, kan.liang@intel.com wrote:
> > From: Kan Liang <kan.liang@intel.com>
> > 
> > Intel PT will take over the LBR hardware. If RTIT_CTL.TraceEn=1, any attempt
> > to read or write the LBR or LER MSRs, including LBR_TOS, will result in a #GP.
> > Since Intel PT can be enabled/disabled at runtime by hardware/BIOS, the LBR
> > MSRs should be protected at runtime as well.
> > 
> > The {rd,wr}msrl_goto macros can protect LBR accesses against the potential
> > #GP. Furthermore, they do not impact the "fast" path's performance.
> 
> NAK!
> 
> I already said this isn't going to ever happen.
> 
> Both PT and LBR are arbitrated through the kernel, therefore we can (and
> must) deny PT when there's existing LBR usage and vice versa.
> 
> We will not hijack resources like this full stop end of story.
> 
> Fuck hardware/BIOS, they should _NOT_ be touching this.
> 
> The 3 people in the world with access to an x86 hardware debugger had
> better be competent and know WTF they're doing and the BIOS can just
> piss off right now, they should not be touching this _EVER_.

I yelled at BIOS engineers over their PMU usage and $vendor added a BIOS
knob to disable that, I'll yell at BIOS engineers again, just give me
their number.

Really, say NO already.


* Re: [PATCH 1/3] x86 msr: msr goto extension support
  2014-07-31 21:05 ` [PATCH 1/3] x86 msr: msr goto extension support Andy Lutomirski
@ 2014-08-01  8:05   ` Peter Zijlstra
  0 siblings, 0 replies; 10+ messages in thread
From: Peter Zijlstra @ 2014-08-01  8:05 UTC (permalink / raw)
  To: Andy Lutomirski; +Cc: kan.liang, andi, alexander.shishkin, linux-kernel, jakub

On Thu, Jul 31, 2014 at 02:05:48PM -0700, Andy Lutomirski wrote:
> On 07/31/2014 02:41 AM, kan.liang@intel.com wrote:
> > +/*
> > + * The _goto version is rdmsrl/wrmsrl with exception handling.
> > + * The advantage (over _safe) is that it can jump directly into the
> > + * exception handling code, with no error test in the "fast" path.
> > + *
> > + * Since asm goto doesn't support outputs, protect the output
> > + * registers with clobbers, and consume the registers immediately.
> > + */
> > +#define rdmsrl_goto(msr, result, fail_label)			\
> > +do {								\
> > +	DECLARE_ARGS(val, low, high);				\
> > +	asm_volatile_goto("2: rdmsr\n"				\
> > +			"1:\n\t"				\
> > +			_ASM_EXTABLE(2b, %l[fail_label])	\
> > +			: /* No outputs. */			\
> > +			: "c" (msr)				\
> > +			: "%rax", "%rdx"			\
> > +			: fail_label);				\
> > +	asm volatile (""					\
> > +			: EAX_EDX_RET(val, low, high)		\
> > +			: );					\
> 
> This is scary -- the compiler is free to optimize this incorrectly, and
> it doesn't even seem very farfetched to me.

Quite so, let's add Jakub.


* Re: [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP
  2014-08-01  7:44     ` Peter Zijlstra
@ 2014-08-01 13:21       ` Andi Kleen
  2014-08-01 18:50         ` Peter Zijlstra
  0 siblings, 1 reply; 10+ messages in thread
From: Andi Kleen @ 2014-08-01 13:21 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: kan.liang, andi, alexander.shishkin, linux-kernel, Ingo Molnar,
	Thomas Gleixner

> > NAK!
> > 
> > I already said this isn't going to ever happen.
> > 
> > Both PT and LBR are arbitrated through the kernel, therefore we can (and
> > must) deny PT when there's existing LBR usage and vice versa.
> > 
> > We will not hijack resources like this full stop end of story.
> > 
> > Fuck hardware/BIOS, they should _NOT_ be touching this.
> > 
> > The 3 people in the world with access to an x86 hardware debugger had
> > better be competent and know WTF they're doing and the BIOS can just
> > piss off right now, they should not be touching this _EVER_.


So you realize that hardware level debugging becomes impossible
if everyone starts using perf?

I don't know how anyone can construe essentially sabotaging powerful
debugging techniques as ever being a good thing. In the end it'll just
lead to more buggy software (and likely hardware) for everyone.


> 
> I yelled at BIOS engineers over their PMU usage and $vendor added a BIOS
> knob to disable that, I'll yell at BIOS engineers again, just give me
> their number.
> 
> Really, say NO already.

Ok, so no technical reason, merely a "me vs you" power play.

Thank you for not stating that earlier and wasting everyone's time.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.


* Re: [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP
  2014-08-01 13:21       ` Andi Kleen
@ 2014-08-01 18:50         ` Peter Zijlstra
  2014-08-02  5:27           ` Ingo Molnar
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2014-08-01 18:50 UTC (permalink / raw)
  To: Andi Kleen
  Cc: kan.liang, alexander.shishkin, linux-kernel, Ingo Molnar,
	Thomas Gleixner

On Fri, Aug 01, 2014 at 03:21:20PM +0200, Andi Kleen wrote:
> > > NAK!
> > > 
> > > I already said this isn't going to ever happen.
> > > 
> > > Both PT and LBR are arbitrated through the kernel, therefore we can (and
> > > must) deny PT when there's existing LBR usage and vice versa.
> > > 
> > > We will not hijack resources like this full stop end of story.
> > > 
> > > Fuck hardware/BIOS, they should _NOT_ be touching this.
> > > 
> > > The 3 people in the world with access to an x86 hardware debugger had
> > > better be competent and know WTF they're doing and the BIOS can just
> > > piss off right now, they should not be touching this _EVER_.
> 
> 
> So you realize that hardware level debugging becomes impossible
> if everyone starts using perf?

That only becomes a problem if you need to hardware debug something that
uses LBR/BTS (in the current state of things).

Now, the patch you did already changes behaviour when PT is enabled, and
therefore you already couldn't.

So this is an explicit blind spot for the hardware debugger, that seems
like a bad idea anyhow.

The thing is, with or without this patch you cannot debug LBR-using
software.

> I don't know how anyone can construe essentially sabotaging powerful
> debugging techniques as ever being a good thing. In the end it'll just
> lead to more buggy software (and likely hardware) for everyone.

How about you see it as an attempt to improve things by not having
arbitrary and limiting blind spots like that?

A hardware debugger should be able to debug everything, including
LBR-using software.

> > I yelled at BIOS engineers over their PMU usage and $vendor added a BIOS
> > knob to disable that, I'll yell at BIOS engineers again, just give me
> > their number.
> > 
> > Really, say NO already.
> 
> Ok, so no technical reason, merely a "me vs you" power play.

Clearly you forgot the discussion we had back then; do you want me to find
you a link, or do you think you can Google it yourself? I'm not the only
one who thinks it's a terrible idea for the BIOS to involve itself in
these things.

> Thank you for not stating that earlier and wasting everyone's time.

Oh, so you think it's entirely reasonable for a BIOS to use PMU features
then? I suppose you do since you wrote that silly document back then
about 'sharing' things.

We should be limiting the things the BIOS does, because the BIOS will
get it wrong and then we have to spend ages working around it. There's
enough BIOS wreckage to deal with, don't encourage them to make more.




* Re: [PATCH 2/3] x86 perf: Protect LBR msrs accessing against potential #GP
  2014-08-01 18:50         ` Peter Zijlstra
@ 2014-08-02  5:27           ` Ingo Molnar
  0 siblings, 0 replies; 10+ messages in thread
From: Ingo Molnar @ 2014-08-02  5:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Andi Kleen, kan.liang, alexander.shishkin, linux-kernel,
	Thomas Gleixner, Linus Torvalds


* Peter Zijlstra <peterz@infradead.org> wrote:

> > > I yelled at BIOS engineers over their PMU usage and $vendor 
> > > added a BIOS knob to disable that, I'll yell at BIOS engineers 
> > > again, just give me their number.
> > > 
> > > Really, say NO already.
> > 
> > Ok, so no technical reason, merely a "me vs you" power play.

Andi, stop the silly arguments already and stop launching personal 
attacks against maintainers who are simply doing their job!

> Clearly you forgot the discussion we had back then; do you want me to 
> find you a link, or do you think you can Google it yourself? I'm not 
> the only one who thinks it's a terrible idea for the BIOS to involve 
> itself in these things.

The BIOS has no business stealing or meddling with limited resources 
used by and managed by the kernel, such as PMU state. In addition to 
the loss of utility to users when the BIOS interferes, PMUs are buggy 
and fragile enough even without any undue BIOS interference.

This was escalated up to Linus last time around and he agreed that 
such BIOS meddling with kernel state is not acceptable, so you'll 
probably have to convince him as well in addition to having to 
convince every perf and x86 maintainer.

	Ingo

