* [PATCH 0/2] combine remote cpu msr access
@ 2015-12-11 22:40 Jacob Pan
  2015-12-11 22:40 ` [PATCH 1/2] x86/msr: add on cpu read/modify/write function Jacob Pan
  2015-12-11 22:40 ` [PATCH 2/2] powercap/rapl: reduce ipi calls Jacob Pan
  0 siblings, 2 replies; 6+ messages in thread
From: Jacob Pan @ 2015-12-11 22:40 UTC (permalink / raw)
  To: LKML, Linux PM, Rafael Wysocki
  Cc: Peter Zijlstra, X86 Kernel, Srinivas Pandruvada, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Jacob Pan

IPI calls to access MSRs on remote CPUs are expensive. Add a library call
for read/modify/write access and use it in the Intel RAPL driver.

Jacob Pan (2):
  x86/msr: add on cpu read/modify/write function
  powercap/rapl: reduce ipi calls

 arch/x86/include/asm/msr.h    | 24 +++++++++++++++
 arch/x86/lib/msr-smp.c        | 47 ++++++++++++++++++++++++++++
 drivers/powercap/intel_rapl.c | 71 ++++++++++++++++++++++---------------------
 3 files changed, 108 insertions(+), 34 deletions(-)

-- 
1.9.1



* [PATCH 1/2] x86/msr: add on cpu read/modify/write function
  2015-12-11 22:40 [PATCH 0/2] combine remote cpu msr access Jacob Pan
@ 2015-12-11 22:40 ` Jacob Pan
  2015-12-20 13:28   ` Thomas Gleixner
  2015-12-11 22:40 ` [PATCH 2/2] powercap/rapl: reduce ipi calls Jacob Pan
  1 sibling, 1 reply; 6+ messages in thread
From: Jacob Pan @ 2015-12-11 22:40 UTC (permalink / raw)
  To: LKML, Linux PM, Rafael Wysocki
  Cc: Peter Zijlstra, X86 Kernel, Srinivas Pandruvada, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Jacob Pan

A read/modify/write of an MSR on a remote CPU is often needed, but no
library call currently exists for it. This patch adds an API to perform a
safe on-CPU read/modify/write so that callers do not have to reinvent it.

Based on initial code from:
Peter Zijlstra <peterz@infradead.org>

Suggested-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
 arch/x86/include/asm/msr.h | 24 +++++++++++++++++++++++
 arch/x86/lib/msr-smp.c     | 47 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 71 insertions(+)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 77d8b28..6143a47 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -27,6 +27,13 @@ struct msr_info {
 	int err;
 };
 
+struct msr_action {
+	u32 msr_no;
+	u64 mask;
+	u64 bits;
+	int err;
+};
+
 struct msr_regs_info {
 	u32 *regs;
 	int err;
@@ -258,6 +265,7 @@ int rdmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
 int wrmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
 int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
 int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
+int rmwmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 mask, u64 bits);
 #else  /*  CONFIG_SMP  */
 static inline int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
 {
@@ -314,6 +322,22 @@ static inline int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
 {
 	return wrmsr_safe_regs(regs);
 }
+static inline int rmwmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 mask, u64 bits)
+{
+	int err;
+	u64 val;
+
+	err = rdmsrl_safe(msr_no, &val);
+	if (err)
+		goto out;
+
+	val &= ~mask;
+	val |= bits;
+
+	err = wrmsrl_safe(msr_no, val);
+out:
+	return err;
+}
 #endif  /* CONFIG_SMP */
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_X86_MSR_H */
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 518532e..60ed278 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -221,6 +221,53 @@ int rdmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
 }
 EXPORT_SYMBOL(rdmsrl_safe_on_cpu);
 
+
+static void __rmwmsrl_safe(void *info)
+{
+	int err;
+	struct msr_action *ma = info;
+	u64 val;
+
+	err = rdmsrl_safe(ma->msr_no, &val);
+	if (err)
+		goto out;
+
+	val &= ~ma->mask;
+	val |= ma->bits;
+
+	err = wrmsrl_safe(ma->msr_no, val);
+
+out:
+	ma->err = err;
+}
+
+/**
+ * rmwmsrl_safe_on_cpu: Perform a read/modify/write msr transaction on cpu
+ *
+ * @cpu:  target cpu
+ * @msr:  msr number
+ * @mask: bitmask to change
+ * @bits: bits value for the mask
+ *
+ * Returns zero for success, a negative number on error.
+ */
+int rmwmsrl_safe_on_cpu(unsigned int cpu, u32 msr, u64 mask, u64 bits)
+{
+	int err;
+	struct msr_action ma;
+
+	memset(&ma, 0, sizeof(ma));
+
+	ma.msr_no = msr;
+	ma.mask = mask;
+	ma.bits = bits;
+
+	err = smp_call_function_single(cpu, __rmwmsrl_safe, &ma, 1);
+
+	return err ? err : ma.err;
+}
+EXPORT_SYMBOL(rmwmsrl_safe_on_cpu);
+
 /*
  * These variants are significantly slower, but allows control over
  * the entire 32-bit GPR set.
-- 
1.9.1



* [PATCH 2/2] powercap/rapl: reduce ipi calls
  2015-12-11 22:40 [PATCH 0/2] combine remote cpu msr access Jacob Pan
  2015-12-11 22:40 ` [PATCH 1/2] x86/msr: add on cpu read/modify/write function Jacob Pan
@ 2015-12-11 22:40 ` Jacob Pan
  1 sibling, 0 replies; 6+ messages in thread
From: Jacob Pan @ 2015-12-11 22:40 UTC (permalink / raw)
  To: LKML, Linux PM, Rafael Wysocki
  Cc: Peter Zijlstra, X86 Kernel, Srinivas Pandruvada, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Jacob Pan

Reduce remote CPU calls (IPIs) for MSR access by combining the read,
modify, and write steps into one function.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
 drivers/powercap/intel_rapl.c | 71 ++++++++++++++++++++++---------------------
 1 file changed, 37 insertions(+), 34 deletions(-)

diff --git a/drivers/powercap/intel_rapl.c b/drivers/powercap/intel_rapl.c
index cc97f08..4fe1627 100644
--- a/drivers/powercap/intel_rapl.c
+++ b/drivers/powercap/intel_rapl.c
@@ -805,30 +805,18 @@ static int rapl_write_data_raw(struct rapl_domain *rd,
 			enum rapl_primitives prim,
 			unsigned long long value)
 {
-	u64 msr_val;
-	u32 msr;
 	struct rapl_primitive_info *rp = &rpi[prim];
 	int cpu;
+	u64 bits;
 
 	cpu = find_active_cpu_on_package(rd->package_id);
 	if (cpu < 0)
 		return cpu;
-	msr = rd->msrs[rp->id];
-	if (rdmsrl_safe_on_cpu(cpu, msr, &msr_val)) {
-		dev_dbg(&rd->power_zone.dev,
-			"failed to read msr 0x%x on cpu %d\n", msr, cpu);
-		return -EIO;
-	}
-	value = rapl_unit_xlate(rd, rd->package_id, rp->unit, value, 1);
-	msr_val &= ~rp->mask;
-	msr_val |= value << rp->shift;
-	if (wrmsrl_safe_on_cpu(cpu, msr, msr_val)) {
-		dev_dbg(&rd->power_zone.dev,
-			"failed to write msr 0x%x on cpu %d\n", msr, cpu);
-		return -EIO;
-	}
 
-	return 0;
+	bits = rapl_unit_xlate(rd, rd->package_id, rp->unit, value, 1);
+	bits <<= rp->shift;
+
+	return rmwmsrl_safe_on_cpu(cpu, rd->msrs[rp->id], rp->mask, bits);
 }
 
 /*
@@ -893,6 +881,21 @@ static int rapl_check_unit_atom(struct rapl_package *rp, int cpu)
 	return 0;
 }
 
+static void power_limit_irq_save_cpu(void *info)
+{
+	u32 l, h = 0;
+	struct rapl_package *rp = (struct rapl_package *)info;
+
+	/* save the state of PLN irq mask bit before disabling it */
+	rdmsr_safe(MSR_IA32_PACKAGE_THERM_INTERRUPT, &l, &h);
+	if (!(rp->power_limit_irq & PACKAGE_PLN_INT_SAVED)) {
+		rp->power_limit_irq = l & PACKAGE_THERM_INT_PLN_ENABLE;
+		rp->power_limit_irq |= PACKAGE_PLN_INT_SAVED;
+	}
+	l &= ~PACKAGE_THERM_INT_PLN_ENABLE;
+	wrmsr_safe(MSR_IA32_PACKAGE_THERM_INTERRUPT, l, h);
+}
+
 
 /* REVISIT:
  * When package power limit is set artificially low by RAPL, LVT
@@ -906,7 +909,6 @@ static int rapl_check_unit_atom(struct rapl_package *rp, int cpu)
 
 static void package_power_limit_irq_save(int package_id)
 {
-	u32 l, h = 0;
 	int cpu;
 	struct rapl_package *rp;
 
@@ -920,20 +922,27 @@ static void package_power_limit_irq_save(int package_id)
 	cpu = find_active_cpu_on_package(package_id);
 	if (cpu < 0)
 		return;
-	/* save the state of PLN irq mask bit before disabling it */
-	rdmsr_safe_on_cpu(cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT, &l, &h);
-	if (!(rp->power_limit_irq & PACKAGE_PLN_INT_SAVED)) {
-		rp->power_limit_irq = l & PACKAGE_THERM_INT_PLN_ENABLE;
-		rp->power_limit_irq |= PACKAGE_PLN_INT_SAVED;
-	}
-	l &= ~PACKAGE_THERM_INT_PLN_ENABLE;
-	wrmsr_on_cpu(cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT, l, h);
+	smp_call_function_single(cpu, power_limit_irq_save_cpu, rp, 1);
+}
+
+static void power_limit_irq_restore_cpu(void *info)
+{
+	u32 l, h = 0;
+	struct rapl_package *rp = (struct rapl_package *)info;
+
+	rdmsr_safe(MSR_IA32_PACKAGE_THERM_INTERRUPT, &l, &h);
+
+	if (rp->power_limit_irq & PACKAGE_THERM_INT_PLN_ENABLE)
+		l |= PACKAGE_THERM_INT_PLN_ENABLE;
+	else
+		l &= ~PACKAGE_THERM_INT_PLN_ENABLE;
+
+	wrmsr_safe(MSR_IA32_PACKAGE_THERM_INTERRUPT, l, h);
 }
 
 /* restore per package power limit interrupt enable state */
 static void package_power_limit_irq_restore(int package_id)
 {
-	u32 l, h;
 	int cpu;
 	struct rapl_package *rp;
 
@@ -951,14 +960,8 @@ static void package_power_limit_irq_restore(int package_id)
 	/* irq enable state not saved, nothing to restore */
 	if (!(rp->power_limit_irq & PACKAGE_PLN_INT_SAVED))
 		return;
-	rdmsr_safe_on_cpu(cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT, &l, &h);
-
-	if (rp->power_limit_irq & PACKAGE_THERM_INT_PLN_ENABLE)
-		l |= PACKAGE_THERM_INT_PLN_ENABLE;
-	else
-		l &= ~PACKAGE_THERM_INT_PLN_ENABLE;
 
-	wrmsr_on_cpu(cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT, l, h);
+	smp_call_function_single(cpu, power_limit_irq_restore_cpu, rp, 1);
 }
 
 static void set_floor_freq_default(struct rapl_domain *rd, bool mode)
-- 
1.9.1



* Re: [PATCH 1/2] x86/msr: add on cpu read/modify/write function
  2015-12-11 22:40 ` [PATCH 1/2] x86/msr: add on cpu read/modify/write function Jacob Pan
@ 2015-12-20 13:28   ` Thomas Gleixner
  2015-12-20 15:27     ` Borislav Petkov
  2016-01-13  1:14     ` Jacob Pan
  0 siblings, 2 replies; 6+ messages in thread
From: Thomas Gleixner @ 2015-12-20 13:28 UTC (permalink / raw)
  To: Jacob Pan
  Cc: LKML, Linux PM, Rafael Wysocki, Peter Zijlstra, X86 Kernel,
	Srinivas Pandruvada, H. Peter Anvin, Ingo Molnar

Jacob,

On Fri, 11 Dec 2015, Jacob Pan wrote:
> +static inline int rmwmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 mask, u64 bits)
> +{
> +	int err;
> +	u64 val;
> +
> +	err = rdmsrl_safe(msr_no, &val);
> +	if (err)
> +		goto out;
> +
> +	val &= ~mask;
> +	val |= bits;
> +
> +	err = wrmsrl_safe(msr_no, val);
> +out:
> +	return err;
> +}

....

> +static void __rmwmsrl_safe(void *info)
> +{
> +	int err;
> +	struct msr_action *ma = info;
> +	u64 val;
> +
> +	err = rdmsrl_safe(ma->msr_no, &val);
> +	if (err)
> +		goto out;
> +
> +	val &= ~ma->mask;
> +	val |= ma->bits;
> +
> +	err = wrmsrl_safe(ma->msr_no, val);
> +
> +out:
> +	ma->err = err;

So this is a copy of the above !SMP inline. What's wrong with providing:

 int rmwmsrl_safe(msr_no, clear_mask, set_mask)

in x86/lib/msr.c and make the !SMP variant of rdmsrl_safe_on_cpu() and that
variant for the SMP case a simple wrapper around it?

static void remote_rmwmsrl_safe(void *info)
{
	struct msr_action *ma = info;

	return rmwmsrl_safe(ma->msr, ma->clear_mask, ma->set_mask);
}

No gotos, no pointless code duplication. Just simple.

Thanks,

	tglx


* Re: [PATCH 1/2] x86/msr: add on cpu read/modify/write function
  2015-12-20 13:28   ` Thomas Gleixner
@ 2015-12-20 15:27     ` Borislav Petkov
  2016-01-13  1:14     ` Jacob Pan
  1 sibling, 0 replies; 6+ messages in thread
From: Borislav Petkov @ 2015-12-20 15:27 UTC (permalink / raw)
  To: Thomas Gleixner, Jacob Pan
  Cc: LKML, Linux PM, Rafael Wysocki, Peter Zijlstra, X86 Kernel,
	Srinivas Pandruvada, H. Peter Anvin, Ingo Molnar

On Sun, Dec 20, 2015 at 02:28:48PM +0100, Thomas Gleixner wrote:
> So this is a copy of the above !SMP inline. What's wrong with providing:
> 
>  int rmwmsrl_safe(msr_no, clear_mask, set_mask)
> 
> in x86/lib/msr.c and make the !SMP variant of rdmsrl_safe_on_cpu() and that
> variant for the SMP case a simple wrapper around it?
> 
> static void remote_rmwmsrl_safe(void *info)
> {
> 	struct msr_action *ma = info;
> 
> 	return rmwmsrl_safe(ma->msr, ma->clear_mask, ma->set_mask);
> }
> 
> No gotos, no pointless code duplication. Just simple.

TBH, I find this new "rmwmsrl" interface (the name is unreadable, btw)
silly:

It provides a plain read-modify-write on a MSR and nothing more but
patch 2 immediately shows that this interface is insufficient for the
other cases, i.e. package_power_limit_irq_save() for example, where you
need to do something more like check bits or error handling.

So there we do smp_call_function_single() with a function which does the
MSR accesses and whatever else is needed.

So why add the former interface in the first place?

Having driver-specific functions do whatever it is required and then
using a single IPI to run them is much cleaner than adding that
unfortunate function which doesn't really suffice.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.


* Re: [PATCH 1/2] x86/msr: add on cpu read/modify/write function
  2015-12-20 13:28   ` Thomas Gleixner
  2015-12-20 15:27     ` Borislav Petkov
@ 2016-01-13  1:14     ` Jacob Pan
  1 sibling, 0 replies; 6+ messages in thread
From: Jacob Pan @ 2016-01-13  1:14 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Linux PM, Rafael Wysocki, Peter Zijlstra, X86 Kernel,
	Srinivas Pandruvada, H. Peter Anvin, Ingo Molnar, jacob.jun.pan

On Sun, 20 Dec 2015 14:28:48 +0100 (CET)
Thomas Gleixner <tglx@linutronix.de> wrote:

> So this is a copy of the above !SMP inline. What's wrong with
> providing:
> 
>  int rmwmsrl_safe(msr_no, clear_mask, set_mask)
> 
> in x86/lib/msr.c and make the !SMP variant of rdmsrl_safe_on_cpu()
> and that variant for the SMP case a simple wrapper around it?
> 
> static void remote_rmwmsrl_safe(void *info)
> {
> 	struct msr_action *ma = info;
> 
> 	return rmwmsrl_safe(ma->msr, ma->clear_mask, ma->set_mask);
> }
> 
> No gotos, no pointless code duplication. Just simple.
Much better, just sent new code in v2. Sorry for the delay.

Thanks

