* [PATCH V3 0/2] measure SMI cost (kernel)
@ 2017-03-27 18:30 kan.liang
  2017-03-27 18:30 ` [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions kan.liang
  2017-03-27 18:30 ` [PATCH V3 2/2] perf/x86: add sysfs entry to freeze counter on SMI kan.liang
  0 siblings, 2 replies; 10+ messages in thread
From: kan.liang @ 2017-03-27 18:30 UTC (permalink / raw)
  To: peterz, tglx, mingo, linux-kernel; +Cc: bp, acme, eranian, jolsa, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

Currently, perf has no way to measure the time spent in System
Management Mode (SMM).

Intel perfmon supports the FREEZE_WHILE_SMM bit in IA32_DEBUGCTL. Once
set, the PMU core counters freeze while an SMI handler runs, but
free-running counters such as APERF are not affected.
The cost of SMIs can therefore be measured as (aperf - cycles).
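
A minimal sketch of that arithmetic (illustrative user-space code; the
function name and sample numbers are made up and are not perf's actual
implementation):

	#include <stdint.h>
	#include <stdio.h>

	/* APERF keeps running through SMM while the core cycles counter
	 * is frozen, so (aperf - cycles) is the cycle cost of the SMIs. */
	static double smi_cycles_pct(uint64_t aperf, uint64_t cycles)
	{
		return aperf ? 100.0 * (aperf - cycles) / aperf : 0.0;
	}

	int main(void)
	{
		/* e.g. aperf = 10,000,000, cycles = 9,990,000 -> 0.1% */
		printf("SMI cycles%%: %.1f\n",
		       smi_cycles_pct(10000000, 9990000));
		return 0;
	}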

A new sysfs entry, /sys/devices/cpu/freeze_on_smi, is introduced to set
the FREEZE_WHILE_SMM bit in IA32_DEBUGCTL.

A new --smi-cost mode in perf stat is implemented to measure the SMI
cost from the cycles and aperf results. In practice, the percentage of
SMI cycles is more useful than the absolute value, so the output shows
the percentage of SMI cycles and the SMI count (SMI#).
Users who want the actual cycle counts can pass --no-metric-only.

Here is an example output.

 Performance counter stats for 'sudo echo ':

SMI cycles%          SMI#
    0.1%              1

       0.010858678 seconds time elapsed

Changes since V1:
 - Only include kernel patch
 - New functions to set or clear an MSR bit on one CPU or on many CPUs.
   Use them to replace the rdmsrl_on_cpu()/wrmsrl_on_cpu() pairs, which
   avoids the extra IPIs and the non-atomic read-modify-write.
 - Support hotplug

Changes since V2:
 - Reuse struct msr_info

Kan Liang (2):
  x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions
  perf/x86: add sysfs entry to freeze counter on SMI

 arch/x86/events/core.c           | 10 ++++++
 arch/x86/events/intel/core.c     | 48 +++++++++++++++++++++++++
 arch/x86/events/perf_event.h     |  3 ++
 arch/x86/include/asm/msr-index.h |  2 ++
 arch/x86/include/asm/msr.h       | 25 +++++++++++++
 arch/x86/lib/msr-smp.c           | 76 ++++++++++++++++++++++++++++++++++++++++
 6 files changed, 164 insertions(+)

-- 
2.7.4


* [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions
  2017-03-27 18:30 [PATCH V3 0/2] measure SMI cost (kernel) kan.liang
@ 2017-03-27 18:30 ` kan.liang
  2017-03-28  8:36   ` Thomas Gleixner
  2017-03-27 18:30 ` [PATCH V3 2/2] perf/x86: add sysfs entry to freeze counter on SMI kan.liang
  1 sibling, 1 reply; 10+ messages in thread
From: kan.liang @ 2017-03-27 18:30 UTC (permalink / raw)
  To: peterz, tglx, mingo, linux-kernel; +Cc: bp, acme, eranian, jolsa, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

To flip an MSR bit on many CPUs or on a specific CPU, the current code
has to do a read-modify-write of the MSR through rd/wrmsr_on_cpu(s),
which sends two IPIs to the given CPU.

It is necessary to extend the single-MSR operations msr_set_bit() and
msr_clear_bit() to many CPUs or to a given CPU. The new helpers send
only one IPI per target CPU, simplify the MSR content manipulation, and
wrap the smp_call_function*() boilerplate code.
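
For illustration, the intended usage looks roughly like this (a sketch
mirroring the call sites in patch 2/2; 'cpu' is a placeholder):

	/* one IPI to flip a DEBUGCTL bit on one CPU ... */
	msr_set_bit_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR,
			   DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);

	/* ... or one IPI per CPU in a mask */
	msr_clear_bit_on_cpus(cpu_possible_mask, MSR_IA32_DEBUGCTLMSR,
			      DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);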

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---
 arch/x86/include/asm/msr.h | 25 +++++++++++++++
 arch/x86/lib/msr-smp.c     | 76 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 898dba2..bfd83ab 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -25,6 +25,7 @@ struct msr_info {
 	struct msr reg;
 	struct msr *msrs;
 	int err;
+	u8 bit;
 };
 
 struct msr_regs_info {
@@ -314,6 +315,10 @@ int msr_set_bit(u32 msr, u8 bit);
 int msr_clear_bit(u32 msr, u8 bit);
 
 #ifdef CONFIG_SMP
+int msr_set_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit);
+int msr_clear_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit);
+void msr_set_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit);
+void msr_clear_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit);
 int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
 int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
 int rdmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
@@ -327,6 +332,26 @@ int wrmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
 int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
 int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
 #else  /*  CONFIG_SMP  */
+static inline int msr_set_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
+{
+	return msr_set_bit(msr, bit);
+}
+
+static inline int msr_clear_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
+{
+	return msr_clear_bit(msr, bit);
+}
+
+static inline void msr_set_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
+{
+	msr_set_bit(msr, bit);
+}
+
+static inline void msr_clear_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
+{
+	msr_clear_bit(msr, bit);
+}
+
 static inline int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
 {
 	rdmsr(msr_no, *l, *h);
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index ce68b6a..8e704d9 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -3,6 +3,82 @@
 #include <linux/smp.h>
 #include <asm/msr.h>
 
+static void __msr_set_bit_on_cpu(void *info)
+{
+	struct msr_info *bit_info = info;
+
+	msr_set_bit(bit_info->msr_no, bit_info->bit);
+}
+
+static void __msr_clear_bit_on_cpu(void *info)
+{
+	struct msr_info *bit_info = info;
+
+	msr_clear_bit(bit_info->msr_no, bit_info->bit);
+}
+
+int msr_set_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
+{
+	struct msr_info info;
+	int err;
+
+	info.msr_no = msr;
+	info.bit = bit;
+
+	err = smp_call_function_single(cpu, __msr_set_bit_on_cpu, &info, 1);
+
+	return err;
+}
+EXPORT_SYMBOL(msr_set_bit_on_cpu);
+
+int msr_clear_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
+{
+	struct msr_info info;
+	int err;
+
+	info.msr_no = msr;
+	info.bit = bit;
+
+	err = smp_call_function_single(cpu, __msr_clear_bit_on_cpu, &info, 1);
+
+	return err;
+}
+EXPORT_SYMBOL(msr_clear_bit_on_cpu);
+
+void msr_set_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
+{
+	struct msr_info info;
+	int this_cpu;
+
+	info.msr_no = msr;
+	info.bit = bit;
+
+	this_cpu = get_cpu();
+	if (cpumask_test_cpu(this_cpu, mask))
+		__msr_set_bit_on_cpu(&info);
+
+	smp_call_function_many(mask, __msr_set_bit_on_cpu, &info, 1);
+	put_cpu();
+}
+EXPORT_SYMBOL(msr_set_bit_on_cpus);
+
+void msr_clear_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
+{
+	struct msr_info info;
+	int this_cpu;
+
+	info.msr_no = msr;
+	info.bit = bit;
+
+	this_cpu = get_cpu();
+	if (cpumask_test_cpu(this_cpu, mask))
+		__msr_clear_bit_on_cpu(&info);
+
+	smp_call_function_many(mask, __msr_clear_bit_on_cpu, &info, 1);
+	put_cpu();
+}
+EXPORT_SYMBOL(msr_clear_bit_on_cpus);
+
 static void __rdmsr_on_cpu(void *info)
 {
 	struct msr_info *rv = info;
-- 
2.7.4


* [PATCH V3 2/2] perf/x86: add sysfs entry to freeze counter on SMI
  2017-03-27 18:30 [PATCH V3 0/2] measure SMI cost (kernel) kan.liang
  2017-03-27 18:30 ` [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions kan.liang
@ 2017-03-27 18:30 ` kan.liang
  2017-03-28  8:42   ` Thomas Gleixner
  1 sibling, 1 reply; 10+ messages in thread
From: kan.liang @ 2017-03-27 18:30 UTC (permalink / raw)
  To: peterz, tglx, mingo, linux-kernel; +Cc: bp, acme, eranian, jolsa, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

Currently, SMIs are visible to all performance counters, because many
users want to measure everything, including SMIs. But in some cases the
SMI cycles should not be counted, for example when calculating the cost
of the SMIs themselves. So a knob is needed.

Setting the FREEZE_WHILE_SMM bit in IA32_DEBUGCTL affects all
performance counters; there is no way to freeze individual counters on
SMI. A per-event interface (e.g. an ioctl or an event attribute) is
therefore not suitable for setting FREEZE_WHILE_SMM.

Add a sysfs entry, /sys/devices/cpu/freeze_on_smi, to set the
FREEZE_WHILE_SMM bit in IA32_DEBUGCTL. When set, perfmon counting and
trace messages are frozen while in SMM.
The value must be 0 or 1 and is applied to all possible CPUs.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---
 arch/x86/events/core.c           | 10 +++++++++
 arch/x86/events/intel/core.c     | 48 ++++++++++++++++++++++++++++++++++++++++
 arch/x86/events/perf_event.h     |  3 +++
 arch/x86/include/asm/msr-index.h |  2 ++
 4 files changed, 63 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 349d4d1..c16fb50 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1750,6 +1750,8 @@ ssize_t x86_event_sysfs_show(char *page, u64 config, u64 event)
 	return ret;
 }
 
+static struct attribute_group x86_pmu_attr_group;
+
 static int __init init_hw_perf_events(void)
 {
 	struct x86_pmu_quirk *quirk;
@@ -1813,6 +1815,14 @@ static int __init init_hw_perf_events(void)
 			x86_pmu_events_group.attrs = tmp;
 	}
 
+	if (x86_pmu.attrs) {
+		struct attribute **tmp;
+
+		tmp = merge_attr(x86_pmu_attr_group.attrs, x86_pmu.attrs);
+		if (!WARN_ON(!tmp))
+			x86_pmu_attr_group.attrs = tmp;
+	}
+
 	pr_info("... version:                %d\n",     x86_pmu.version);
 	pr_info("... bit width:              %d\n",     x86_pmu.cntval_bits);
 	pr_info("... generic registers:      %d\n",     x86_pmu.num_counters);
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 4244bed..ecb321e 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3174,6 +3174,11 @@ static void intel_pmu_cpu_starting(int cpu)
 
 	cpuc->lbr_sel = NULL;
 
+	if (x86_pmu.attr_freeze_on_smi)
+		msr_set_bit_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
+	else
+		msr_clear_bit_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
+
 	if (!cpuc->shared_regs)
 		return;
 
@@ -3595,6 +3600,47 @@ static struct attribute *hsw_events_attrs[] = {
 	NULL
 };
 
+static ssize_t freeze_on_smi_show(struct device *cdev,
+				  struct device_attribute *attr,
+				  char *buf)
+{
+	return sprintf(buf, "%d\n", x86_pmu.attr_freeze_on_smi);
+}
+
+static ssize_t freeze_on_smi_store(struct device *cdev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t count)
+{
+	unsigned long val;
+	ssize_t ret;
+
+	ret = kstrtoul(buf, 0, &val);
+	if (ret)
+		return ret;
+
+	if (val > 1)
+		return -EINVAL;
+
+	if (x86_pmu.attr_freeze_on_smi == val)
+		return count;
+
+	if (val)
+		msr_set_bit_on_cpus(cpu_possible_mask, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
+	else
+		msr_clear_bit_on_cpus(cpu_possible_mask, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
+
+	x86_pmu.attr_freeze_on_smi = val;
+
+	return count;
+}
+
+static DEVICE_ATTR_RW(freeze_on_smi);
+
+static struct attribute *intel_pmu_attrs[] = {
+	&dev_attr_freeze_on_smi.attr,
+	NULL,
+};
+
 __init int intel_pmu_init(void)
 {
 	union cpuid10_edx edx;
@@ -3641,6 +3687,8 @@ __init int intel_pmu_init(void)
 
 	x86_pmu.max_pebs_events		= min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters);
 
+
+	x86_pmu.attrs			= intel_pmu_attrs;
 	/*
 	 * Quirk: v2 perfmon does not report fixed-purpose events, so
 	 * assume at least 3 events, when not running in a hypervisor:
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index bcbb1d2..110cb9b0 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -561,6 +561,9 @@ struct x86_pmu {
 	ssize_t		(*events_sysfs_show)(char *page, u64 config);
 	struct attribute **cpu_events;
 
+	int		attr_freeze_on_smi;
+	struct attribute **attrs;
+
 	/*
 	 * CPU Hotplug hooks
 	 */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index d8b5f8a..bdb00fa 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -134,6 +134,8 @@
 #define DEBUGCTLMSR_BTS_OFF_OS		(1UL <<  9)
 #define DEBUGCTLMSR_BTS_OFF_USR		(1UL << 10)
 #define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI	(1UL << 11)
+#define DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT	14
+#define DEBUGCTLMSR_FREEZE_WHILE_SMM	(1UL << DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT)
 
 #define MSR_PEBS_FRONTEND		0x000003f7
 
-- 
2.7.4


* Re: [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions
  2017-03-27 18:30 ` [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions kan.liang
@ 2017-03-28  8:36   ` Thomas Gleixner
  2017-03-28 17:23     ` Thomas Gleixner
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2017-03-28  8:36 UTC (permalink / raw)
  To: Kan Liang; +Cc: peterz, mingo, linux-kernel, bp, acme, eranian, jolsa, ak

On Mon, 27 Mar 2017, kan.liang@intel.com wrote:

> From: Kan Liang <Kan.liang@intel.com>
> 
> To flip an MSR bit on many CPUs or on a specific CPU, the current code
> has to do a read-modify-write of the MSR through rd/wrmsr_on_cpu(s),
> which sends two IPIs to the given CPU.

The IPIs are the least of the problems, really. The real problem is that

       rdmsr_on_cpu()
       wrmsr_on_cpu()

is not atomic. That's what wants to be solved. The reduction of IPIs is
just a side effect.
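
I.e. the open-coded sequence leaves a window between the two calls
(illustrative sketch, placeholder variables):

	u64 val;

	rdmsrl_on_cpu(cpu, msr, &val);			/* IPI #1 */
	/* nothing stops the MSR from changing here */
	wrmsrl_on_cpu(cpu, msr, val | BIT_ULL(bit));	/* IPI #2 */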

>  #else  /*  CONFIG_SMP  */
> +static inline int msr_set_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
> +{
> +	return msr_set_bit(msr, bit);
> +}
> +
> +static inline int msr_clear_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
> +{
> +	return msr_clear_bit(msr, bit);
> +}
> +
> +static inline void msr_set_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
> +{
> +	msr_set_bit(msr, bit);
> +}
> +
> +static inline void msr_clear_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
> +{
> +	msr_clear_bit(msr, bit);
> +}

This is utter crap because it's fundamentally different from the SMP
version.

msr_set/clear_bit() are not protected by anything. And in your call site
this is invoked from fully preemptible context. What protects against
context switch and interrupts fiddling with DEBUGMSR?

Thanks,

	tglx


* Re: [PATCH V3 2/2] perf/x86: add sysfs entry to freeze counter on SMI
  2017-03-27 18:30 ` [PATCH V3 2/2] perf/x86: add sysfs entry to freeze counter on SMI kan.liang
@ 2017-03-28  8:42   ` Thomas Gleixner
  2017-03-28 13:23     ` Liang, Kan
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2017-03-28  8:42 UTC (permalink / raw)
  To: Kan Liang; +Cc: peterz, mingo, linux-kernel, bp, acme, eranian, jolsa, ak

On Mon, 27 Mar 2017, kan.liang@intel.com wrote:
> +
> +	if (val)
> +		msr_set_bit_on_cpus(cpu_possible_mask, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
> +	else
> +		msr_clear_bit_on_cpus(cpu_possible_mask, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);

This is still not protected against CPU hotplug. What's so hard about:

	get_online_cpus();

	if (val) {
		msr_set_bit_on_cpus(cpu_online_mask, MSR_IA32_DEBUGCTLMSR,
				    DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
	} else {
		msr_clear_bit_on_cpus(cpu_online_mask, MSR_IA32_DEBUGCTLMSR,
				      DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
	}

	put_online_cpus();

Aside of that, when this is set to SMI freeze, what causes a CPU which
comes online after that point to set the bit as well? Nothing AFAICT.

Thanks,

	tglx


* RE: [PATCH V3 2/2] perf/x86: add sysfs entry to freeze counter on SMI
  2017-03-28  8:42   ` Thomas Gleixner
@ 2017-03-28 13:23     ` Liang, Kan
  2017-03-28 17:03       ` Thomas Gleixner
  0 siblings, 1 reply; 10+ messages in thread
From: Liang, Kan @ 2017-03-28 13:23 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: peterz, mingo, linux-kernel, bp, acme, eranian, jolsa, ak



> 
> On Mon, 27 Mar 2017, kan.liang@intel.com wrote:
> > +
> > +	if (val)
> > +		msr_set_bit_on_cpus(cpu_possible_mask, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
> > +	else
> > +		msr_clear_bit_on_cpus(cpu_possible_mask, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
> 
> This is still not protected against CPU hotplug. What's so hard about:
> 
> 	get_online_cpus();
> 
> 	if (val) {
> 		msr_set_bit_on_cpus(cpu_online_mask, MSR_IA32_DEBUGCTLMSR,
> 				    DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
> 	} else {
> 		msr_clear_bit_on_cpus(cpu_online_mask, MSR_IA32_DEBUGCTLMSR,
> 				      DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
> 	}
> 
> 	put_online_cpus();
> 
> Aside of that, when this is set to SMI freeze, what causes a CPU which
> comes online after that point to set the bit as well? Nothing AFAICT.
> 
> 

I've patched intel_pmu_cpu_starting().
I think that guarantees the bit gets set on a newly onlined CPU.

@@ -3174,6 +3174,11 @@ static void intel_pmu_cpu_starting(int cpu)
 
 	cpuc->lbr_sel = NULL;
 
+	if (x86_pmu.attr_freeze_on_smi)
+		msr_set_bit_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
+	else
+		msr_clear_bit_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);
+
 	if (!cpuc->shared_regs)
 		return;

Thanks,
Kan


* RE: [PATCH V3 2/2] perf/x86: add sysfs entry to freeze counter on SMI
  2017-03-28 13:23     ` Liang, Kan
@ 2017-03-28 17:03       ` Thomas Gleixner
  0 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2017-03-28 17:03 UTC (permalink / raw)
  To: Liang, Kan; +Cc: peterz, mingo, linux-kernel, bp, acme, eranian, jolsa, ak

On Tue, 28 Mar 2017, Liang, Kan wrote:
> > On Mon, 27 Mar 2017, kan.liang@intel.com wrote:
> > 	put_online_cpus();
> > 
> > Aside of that, when this is set to SMI freeze, what causes a CPU which
> > comes online after that point to set the bit as well? Nothing AFAICT.
> 
> I've patched intel_pmu_cpu_starting().
> I think that guarantees the bit gets set on a newly onlined CPU.

Only when the hotplug protection of the write is in place.....

> @@ -3174,6 +3174,11 @@ static void intel_pmu_cpu_starting(int cpu)
>  
>  	cpuc->lbr_sel = NULL;
>  
> +	if (x86_pmu.attr_freeze_on_smi)
> +		msr_set_bit_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_FREEZE_WHILE_SMM_BIT);

Can you please use brackets and line breaks?

Thanks,

	tglx


* Re: [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions
  2017-03-28  8:36   ` Thomas Gleixner
@ 2017-03-28 17:23     ` Thomas Gleixner
  2017-03-28 17:38       ` Liang, Kan
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2017-03-28 17:23 UTC (permalink / raw)
  To: Kan Liang; +Cc: peterz, mingo, linux-kernel, bp, acme, eranian, jolsa, ak

On Tue, 28 Mar 2017, Thomas Gleixner wrote:
> On Mon, 27 Mar 2017, kan.liang@intel.com wrote:
> 
> > From: Kan Liang <Kan.liang@intel.com>
> > 
> > To flip an MSR bit on many CPUs or on a specific CPU, the current code
> > has to do a read-modify-write of the MSR through rd/wrmsr_on_cpu(s),
> > which sends two IPIs to the given CPU.
> 
> The IPIs are the least of the problems, really. The real problem is that
> 
>        rdmsr_on_cpu()
>        wrmsr_on_cpu()
> 
> is not atomic. That's what wants to be solved. The reduction of IPIs is
> just a side effect.
> 
> >  #else  /*  CONFIG_SMP  */
> > +static inline int msr_set_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
> > +{
> > +	return msr_set_bit(msr, bit);
> > +}
> > +
> > +static inline int msr_clear_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
> > +{
> > +	return msr_clear_bit(msr, bit);
> > +}
> > +
> > +static inline void msr_set_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
> > +{
> > +	msr_set_bit(msr, bit);
> > +}
> > +
> > +static inline void msr_clear_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
> > +{
> > +	msr_clear_bit(msr, bit);
> > +}
> 
> This is utter crap because it's fundamentally different from the SMP
> version.
> 
> msr_set/clear_bit() are not protected by anything. And in your call site
> this is invoked from fully preemptible context. What protects against
> context switch and interrupts fiddling with DEBUGMSR?

And thinking more about that whole interface. It's just overkill.

diff --git a/arch/x86/lib/msr.c b/arch/x86/lib/msr.c
index d1dee753b949..35763927adaa 100644
--- a/arch/x86/lib/msr.c
+++ b/arch/x86/lib/msr.c
@@ -58,7 +58,7 @@ int msr_write(u32 msr, struct msr *m)
 	return wrmsrl_safe(msr, m->q);
 }
 
-static inline int __flip_bit(u32 msr, u8 bit, bool set)
+int msr_flip_bit(u32 msr, u8 bit, bool set)
 {
 	struct msr m, m1;
 	int err = -EINVAL;
@@ -85,6 +85,7 @@ static inline int __flip_bit(u32 msr, u8 bit, bool set)
 
 	return 1;
 }
+EXPORT_SYMBOL_GPL(msr_flip_bit);
 
 /**
  * Set @bit in a MSR @msr.
@@ -96,7 +97,7 @@ static inline int __flip_bit(u32 msr, u8 bit, bool set)
  */
 int msr_set_bit(u32 msr, u8 bit)
 {
-	return __flip_bit(msr, bit, true);
+	return msr_flip_bit(msr, bit, true);
 }
 
 /**
@@ -109,7 +110,7 @@ int msr_set_bit(u32 msr, u8 bit)
  */
 int msr_clear_bit(u32 msr, u8 bit)
 {
-	return __flip_bit(msr, bit, false);
+	return msr_flip_bit(msr, bit, false);
 }
 
 #ifdef CONFIG_TRACEPOINTS

And in the driver:

static void flip_smm_bit(void *data)
{
	int val = *(int *)data;
	
	msr_flip_bit(DEBUGMSR, SMMBIT, val);
}

And in the write function:

       smp_call_function(flip_smm_bit, &val, 1);

That avoids all the extra interfaces and requires less code and less
text footprint when unused .....

Thanks,

	tglx


* RE: [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions
  2017-03-28 17:23     ` Thomas Gleixner
@ 2017-03-28 17:38       ` Liang, Kan
  2017-03-28 18:34         ` Thomas Gleixner
  0 siblings, 1 reply; 10+ messages in thread
From: Liang, Kan @ 2017-03-28 17:38 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: peterz, mingo, linux-kernel, bp, acme, eranian, jolsa, ak



> > msr_set/clear_bit() are not protected by anything. And in your call
> > site this is invoked from fully preemptible context. What protects
> > against context switch and interrupts fiddling with DEBUGMSR?
> 
> And thinking more about that whole interface. It's just overkill.
> 
> diff --git a/arch/x86/lib/msr.c b/arch/x86/lib/msr.c
> index d1dee753b949..35763927adaa 100644
> --- a/arch/x86/lib/msr.c
> +++ b/arch/x86/lib/msr.c
> @@ -58,7 +58,7 @@ int msr_write(u32 msr, struct msr *m)
>  	return wrmsrl_safe(msr, m->q);
>  }
> 
> -static inline int __flip_bit(u32 msr, u8 bit, bool set)
> +int msr_flip_bit(u32 msr, u8 bit, bool set)
>  {
>  	struct msr m, m1;
>  	int err = -EINVAL;
> @@ -85,6 +85,7 @@ static inline int __flip_bit(u32 msr, u8 bit, bool set)
> 
>  	return 1;
>  }
> +EXPORT_SYMBOL_GPL(msr_flip_bit);
> 
>  /**
>   * Set @bit in a MSR @msr.
> @@ -96,7 +97,7 @@ static inline int __flip_bit(u32 msr, u8 bit, bool set)
>   */
>  int msr_set_bit(u32 msr, u8 bit)
>  {
> -	return __flip_bit(msr, bit, true);
> +	return msr_flip_bit(msr, bit, true);
>  }
> 
>  /**
> @@ -109,7 +110,7 @@ int msr_set_bit(u32 msr, u8 bit)
>   */
>  int msr_clear_bit(u32 msr, u8 bit)
>  {
> -	return __flip_bit(msr, bit, false);
> +	return msr_flip_bit(msr, bit, false);
>  }
> 
>  #ifdef CONFIG_TRACEPOINTS
> 
> And in the driver:
> 
> static void flip_smm_bit(void *data)
> {
> 	int val = *(int *)data;
> 
> 	msr_flip_bit(DEBUGMSR, SMMBIT, val);
> }
> 
> And in the write function:
> 
>        smp_call_function(flip_smm_bit, &val, 1);
> 
> That avoids all the extra interfaces and requires less code and less
> text footprint when unused .....
> 

Thanks. It simplifies the code very much.
I think we still need to protect the smp_call_function() in the driver, right?
Would the following code be enough?

get_online_cpus();
preempt_disable();
smp_call_function(flip_smm_bit, &val, 1);
preempt_enable();
put_online_cpus();

Thanks,
Kan


* RE: [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access functions
  2017-03-28 17:38       ` Liang, Kan
@ 2017-03-28 18:34         ` Thomas Gleixner
  0 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2017-03-28 18:34 UTC (permalink / raw)
  To: Liang, Kan; +Cc: peterz, mingo, linux-kernel, bp, acme, eranian, jolsa, ak

On Tue, 28 Mar 2017, Liang, Kan wrote:
> Thanks. It simplifies the code very much.
> I think we still need to protect the smp_call_function() in the driver, right?

Yes.

> Would the following code be enough?
> 
> get_online_cpus();
> preempt_disable();

Only get_online_cpus(). smp_call_function() disables preemption internally.
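
E.g. the store path then becomes (a sketch; on_each_cpu() is used here
because, unlike smp_call_function(), it also runs the callback on the
calling CPU):

	get_online_cpus();
	on_each_cpu(flip_smm_bit, &val, 1);
	put_online_cpus();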

Thanks,

	tglx
