linux-kernel.vger.kernel.org archive mirror
* [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT
@ 2015-05-19  0:00 Thomas Gleixner
  2015-05-19  0:00 ` [patch 1/6] x86, perf, cqm: Document PQR MSR abuse Thomas Gleixner
                   ` (6 more replies)
  0 siblings, 7 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19  0:00 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming, Will Auld,
	Kanaka Juvva

While reviewing the RDT/CAT patches I had to look into the perf CQM
code. As usual when my review mood reaches the grumpiness level, I
start to poke around in the code some more and find stuff which really
sucks.

Here is a step by step patch series, which cleans up and clarifies the
code and prepares it for reuse of the PQR cache for the RDT/CAT
stuff.

Compile tested only.

Shameless hint @Intel: I have no access to a box with CQM/RDT :)

Thanks,

	tglx


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [patch 1/6] x86, perf, cqm: Document PQR MSR abuse
  2015-05-19  0:00 [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Thomas Gleixner
@ 2015-05-19  0:00 ` Thomas Gleixner
  2015-05-19 11:53   ` Matt Fleming
  2015-05-27 10:02   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
  2015-05-19  0:00 ` [patch 2/6] x86, perf, cqm: Use proper data type Thomas Gleixner
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19  0:00 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming, Will Auld,
	Kanaka Juvva

[-- Attachment #1: x86-perf-cqm-document-PQR-MSR-abuse.patch --]
[-- Type: text/plain, Size: 1470 bytes --]

The cqm code acts like it owns the PQR MSR completely. That's not true
because only the lower 10 bits are used for CQM. The upper 32 bits are
used for the CLass Of Service ID (closid). Document the abuse. This
will be fixed in a later patch.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c |   15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -978,7 +978,12 @@ static void intel_cqm_event_start(struct
 		WARN_ON_ONCE(state->rmid);
 
 	state->rmid = rmid;
-	wrmsrl(MSR_IA32_PQR_ASSOC, state->rmid);
+	/*
+	 * This is actually wrong, as the upper 32 bit MSR contain the
+	 * closid which is used for configuring the Cache Allocation
+	 * Technology component.
+	 */
+	wrmsr(MSR_IA32_PQR_ASSOC, rmid, 0);
 
 	raw_spin_unlock_irqrestore(&state->lock, flags);
 }
@@ -998,7 +1003,13 @@ static void intel_cqm_event_stop(struct
 
 	if (!--state->cnt) {
 		state->rmid = 0;
-		wrmsrl(MSR_IA32_PQR_ASSOC, 0);
+		/*
+		 * This is actually wrong, as the upper 32 bit of the
+		 * MSR contain the closid which is used for
+		 * configuring the Cache Allocation Technology
+		 * component.
+		 */
+		wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
 	} else {
 		WARN_ON_ONCE(!state->rmid);
 	}




* [patch 2/6] x86, perf, cqm: Use proper data type
  2015-05-19  0:00 [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Thomas Gleixner
  2015-05-19  0:00 ` [patch 1/6] x86, perf, cqm: Document PQR MSR abuse Thomas Gleixner
@ 2015-05-19  0:00 ` Thomas Gleixner
  2015-05-19  8:58   ` Matt Fleming
  2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: Use proper data types tip-bot for Thomas Gleixner
  2015-05-19  0:00 ` [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache Thomas Gleixner
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19  0:00 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming, Will Auld,
	Kanaka Juvva

[-- Attachment #1: x86-perf-cqm-use-proper-data-types.patch --]
[-- Type: text/plain, Size: 1493 bytes --]

int is really not a proper data type for an MSR. Use u32 to make it
clear that we are dealing with a 32-bit unsigned hardware value.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c |    4 ++--
 include/linux/perf_event.h                 |    2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -18,7 +18,7 @@ static unsigned int cqm_l3_scale; /* sup
 
 struct intel_cqm_state {
 	raw_spinlock_t		lock;
-	int			rmid;
+	u32			rmid;
 	int			cnt;
 };
 
@@ -962,7 +962,7 @@ out:
 static void intel_cqm_event_start(struct perf_event *event, int mode)
 {
 	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
-	unsigned int rmid = event->hw.cqm_rmid;
+	u32 rmid = event->hw.cqm_rmid;
 	unsigned long flags;
 
 	if (!(event->hw.cqm_state & PERF_HES_STOPPED))
Index: linux/include/linux/perf_event.h
===================================================================
--- linux.orig/include/linux/perf_event.h
+++ linux/include/linux/perf_event.h
@@ -124,7 +124,7 @@ struct hw_perf_event {
 		};
 		struct { /* intel_cqm */
 			int			cqm_state;
-			int			cqm_rmid;
+			u32			cqm_rmid;
 			struct list_head	cqm_events_entry;
 			struct list_head	cqm_groups_entry;
 			struct list_head	cqm_group_entry;




* [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache
  2015-05-19  0:00 [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Thomas Gleixner
  2015-05-19  0:00 ` [patch 1/6] x86, perf, cqm: Document PQR MSR abuse Thomas Gleixner
  2015-05-19  0:00 ` [patch 2/6] x86, perf, cqm: Use proper data type Thomas Gleixner
@ 2015-05-19  0:00 ` Thomas Gleixner
  2015-05-19  9:13   ` Matt Fleming
                     ` (2 more replies)
  2015-05-19  0:00 ` [patch 4/6] x86, perf, cqm: Avoid pointless msr write Thomas Gleixner
                   ` (3 subsequent siblings)
  6 siblings, 3 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19  0:00 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming, Will Auld,
	Kanaka Juvva

[-- Attachment #1: x86-perf-cqm-remove-pointless-spinlock.patch --]
[-- Type: text/plain, Size: 2730 bytes --]

struct intel_cqm_state is a strict per cpu cache of the rmid and the
usage counter. It can never be modified from a remote cpu.

The 3 functions which modify the content: start, stop and del (del
maps to stop) are called from the perf core with interrupts disabled,
which is enough protection for the per cpu state values.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c |   17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -17,11 +17,16 @@ static unsigned int cqm_max_rmid = -1;
 static unsigned int cqm_l3_scale; /* supposedly cacheline size */
 
 struct intel_cqm_state {
-	raw_spinlock_t		lock;
 	u32			rmid;
 	int			cnt;
 };
 
+/*
+ * The cached intel_cqm_state is strictly per cpu and can never be
+ * updated from a remote cpu. Both functions which modify the state
+ * (intel_cqm_event_start and intel_cqm_event_stop) are called with
+ * interrupts disabled, which is sufficient for the protection.
+ */
 static DEFINE_PER_CPU(struct intel_cqm_state, cqm_state);
 
 /*
@@ -963,15 +968,12 @@ static void intel_cqm_event_start(struct
 {
 	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
 	u32 rmid = event->hw.cqm_rmid;
-	unsigned long flags;
 
 	if (!(event->hw.cqm_state & PERF_HES_STOPPED))
 		return;
 
 	event->hw.cqm_state &= ~PERF_HES_STOPPED;
 
-	raw_spin_lock_irqsave(&state->lock, flags);
-
 	if (state->cnt++)
 		WARN_ON_ONCE(state->rmid != rmid);
 	else
@@ -984,21 +986,17 @@ static void intel_cqm_event_start(struct
 	 * Technology component.
 	 */
 	wrmsr(MSR_IA32_PQR_ASSOC, rmid, 0);
-
-	raw_spin_unlock_irqrestore(&state->lock, flags);
 }
 
 static void intel_cqm_event_stop(struct perf_event *event, int mode)
 {
 	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
-	unsigned long flags;
 
 	if (event->hw.cqm_state & PERF_HES_STOPPED)
 		return;
 
 	event->hw.cqm_state |= PERF_HES_STOPPED;
 
-	raw_spin_lock_irqsave(&state->lock, flags);
 	intel_cqm_event_read(event);
 
 	if (!--state->cnt) {
@@ -1013,8 +1011,6 @@ static void intel_cqm_event_stop(struct
 	} else {
 		WARN_ON_ONCE(!state->rmid);
 	}
-
-	raw_spin_unlock_irqrestore(&state->lock, flags);
 }
 
 static int intel_cqm_event_add(struct perf_event *event, int mode)
@@ -1257,7 +1253,6 @@ static void intel_cqm_cpu_prepare(unsign
 	struct intel_cqm_state *state = &per_cpu(cqm_state, cpu);
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 
-	raw_spin_lock_init(&state->lock);
 	state->rmid = 0;
 	state->cnt  = 0;
 




* [patch 4/6] x86, perf, cqm: Avoid pointless msr write
  2015-05-19  0:00 [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Thomas Gleixner
                   ` (2 preceding siblings ...)
  2015-05-19  0:00 ` [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache Thomas Gleixner
@ 2015-05-19  0:00 ` Thomas Gleixner
  2015-05-19  9:17   ` Matt Fleming
  2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: Avoid pointless MSR write tip-bot for Thomas Gleixner
  2015-05-19  0:00 ` [patch 5/6] x86, perf, cqm: Remove useless wrapper function Thomas Gleixner
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19  0:00 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming, Will Auld,
	Kanaka Juvva

[-- Attachment #1: x86-perf-cqm-avoid-pointless-msr-write.patch --]
[-- Type: text/plain, Size: 817 bytes --]

If the usage counter is non-zero there is no point in updating the rmid
in the PQR MSR.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -974,10 +974,12 @@ static void intel_cqm_event_start(struct
 
 	event->hw.cqm_state &= ~PERF_HES_STOPPED;
 
-	if (state->cnt++)
-		WARN_ON_ONCE(state->rmid != rmid);
-	else
+	if (state->cnt++) {
+		if (!WARN_ON_ONCE(state->rmid != rmid))
+			return;
+	} else {
 		WARN_ON_ONCE(state->rmid);
+	}
 
 	state->rmid = rmid;
 	/*




* [patch 5/6] x86, perf, cqm: Remove useless wrapper function
  2015-05-19  0:00 [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Thomas Gleixner
                   ` (3 preceding siblings ...)
  2015-05-19  0:00 ` [patch 4/6] x86, perf, cqm: Avoid pointless msr write Thomas Gleixner
@ 2015-05-19  0:00 ` Thomas Gleixner
  2015-05-19  9:18   ` Matt Fleming
  2015-05-27 10:04   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
  2015-05-19  0:00 ` [patch 6/6] x86, perf, cqm: Add storage for closid and cleanup struct intel_pqr_state Thomas Gleixner
  2015-05-19  7:42 ` [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Peter Zijlstra
  6 siblings, 2 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19  0:00 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming, Will Auld,
	Kanaka Juvva

[-- Attachment #1: x86-perf-cqm-remove-useless-wrapper-function.patch --]
[-- Type: text/plain, Size: 1157 bytes --]

intel_cqm_event_del is a 1:1 wrapper for intel_cqm_event_stop. Remove
the useless gunk.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c |    7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -1033,11 +1033,6 @@ static int intel_cqm_event_add(struct pe
 	return 0;
 }
 
-static void intel_cqm_event_del(struct perf_event *event, int mode)
-{
-	intel_cqm_event_stop(event, mode);
-}
-
 static void intel_cqm_event_destroy(struct perf_event *event)
 {
 	struct perf_event *group_other = NULL;
@@ -1230,7 +1225,7 @@ static struct pmu intel_cqm_pmu = {
 	.task_ctx_nr	     = perf_sw_context,
 	.event_init	     = intel_cqm_event_init,
 	.add		     = intel_cqm_event_add,
-	.del		     = intel_cqm_event_del,
+	.del		     = intel_cqm_event_stop,
 	.start		     = intel_cqm_event_start,
 	.stop		     = intel_cqm_event_stop,
 	.read		     = intel_cqm_event_read,




* [patch 6/6] x86, perf, cqm: Add storage for closid and cleanup struct intel_pqr_state
  2015-05-19  0:00 [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Thomas Gleixner
                   ` (4 preceding siblings ...)
  2015-05-19  0:00 ` [patch 5/6] x86, perf, cqm: Remove useless wrapper function Thomas Gleixner
@ 2015-05-19  0:00 ` Thomas Gleixner
  2015-05-19 11:54   ` Matt Fleming
  2015-05-27 10:04   ` [tip:perf/core] perf/x86/intel/cqm: Add storage for 'closid' and clean up 'struct intel_pqr_state' tip-bot for Thomas Gleixner
  2015-05-19  7:42 ` [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Peter Zijlstra
  6 siblings, 2 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19  0:00 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming, Will Auld,
	Kanaka Juvva

[-- Attachment #1: x86-perf-cqm-add-storage-for-closid.patch --]
[-- Type: text/plain, Size: 4299 bytes --]

closid (CLass Of Service ID) is used for the Class based Cache
Allocation Technology (CAT). Add explicit storage to the per cpu cache
for it, so it can be used later with the CAT support (this requires
moving the per cpu data).

While at it:

 - Rename the structure to intel_pqr_state which reflects the actual
   purpose of the struct: Cache values which go into the PQR MSR

 - Rename 'cnt' to rmid_usecnt which reflects the actual purpose of
   the counter.

 - Document the structure and the struct members.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c |   50 +++++++++++++++--------------
 1 file changed, 27 insertions(+), 23 deletions(-)

Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -16,18 +16,32 @@
 static unsigned int cqm_max_rmid = -1;
 static unsigned int cqm_l3_scale; /* supposedly cacheline size */
 
-struct intel_cqm_state {
+/**
+ * struct intel_pqr_state - State cache for the PQR MSR
+ * @rmid:	The cached Resource Monitoring ID
+ * @closid:	The cached Class Of Service ID
+ * @usecnt:	The usage counter for rmid
+ *
+ * The upper 32 bits of MSR_IA32_PQR_ASSOC contain closid and the
+ * lower 10 bits rmid. The update to MSR_IA32_PQR_ASSOC always
+ * contains both parts, so we need to cache them.
+ *
+ * The cache also helps to avoid pointless updates if the value does
+ * not change.
+ */
+struct intel_pqr_state {
 	u32			rmid;
-	int			cnt;
+	u32			closid;
+	int			rmid_usecnt;
 };
 
 /*
- * The cached intel_cqm_state is strictly per cpu and can never be
+ * The cached intel_pqr_state is strictly per cpu and can never be
  * updated from a remote cpu. Both functions which modify the state
  * (intel_cqm_event_start and intel_cqm_event_stop) are called with
  * interrupts disabled, which is sufficient for the protection.
  */
-static DEFINE_PER_CPU(struct intel_cqm_state, cqm_state);
+static DEFINE_PER_CPU(struct intel_pqr_state, pqr_state);
 
 /*
  * Protects cache_cgroups and cqm_rmid_free_lru and cqm_rmid_limbo_lru.
@@ -966,7 +980,7 @@ out:
 
 static void intel_cqm_event_start(struct perf_event *event, int mode)
 {
-	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
+	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
 	u32 rmid = event->hw.cqm_rmid;
 
 	if (!(event->hw.cqm_state & PERF_HES_STOPPED))
@@ -974,7 +988,7 @@ static void intel_cqm_event_start(struct
 
 	event->hw.cqm_state &= ~PERF_HES_STOPPED;
 
-	if (state->cnt++) {
+	if (state->rmid_usecnt++) {
 		if (!WARN_ON_ONCE(state->rmid != rmid))
 			return;
 	} else {
@@ -982,17 +996,12 @@ static void intel_cqm_event_start(struct
 	}
 
 	state->rmid = rmid;
-	/*
-	 * This is actually wrong, as the upper 32 bit MSR contain the
-	 * closid which is used for configuring the Cache Allocation
-	 * Technology component.
-	 */
-	wrmsr(MSR_IA32_PQR_ASSOC, rmid, 0);
+	wrmsr(MSR_IA32_PQR_ASSOC, rmid, state->closid);
 }
 
 static void intel_cqm_event_stop(struct perf_event *event, int mode)
 {
-	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
+	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
 
 	if (event->hw.cqm_state & PERF_HES_STOPPED)
 		return;
@@ -1001,15 +1010,9 @@ static void intel_cqm_event_stop(struct
 
 	intel_cqm_event_read(event);
 
-	if (!--state->cnt) {
+	if (!--state->rmid_usecnt) {
 		state->rmid = 0;
-		/*
-		 * This is actually wrong, as the upper 32 bit of the
-		 * MSR contain the closid which is used for
-		 * configuring the Cache Allocation Technology
-		 * component.
-		 */
-		wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
+		wrmsr(MSR_IA32_PQR_ASSOC, 0, state->closid);
 	} else {
 		WARN_ON_ONCE(!state->rmid);
 	}
@@ -1247,11 +1250,12 @@ static inline void cqm_pick_event_reader
 
 static void intel_cqm_cpu_prepare(unsigned int cpu)
 {
-	struct intel_cqm_state *state = &per_cpu(cqm_state, cpu);
+	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 
 	state->rmid = 0;
-	state->cnt  = 0;
+	state->closid = 0;
+	state->rmid_usecnt = 0;
 
 	WARN_ON(c->x86_cache_max_rmid != cqm_max_rmid);
 	WARN_ON(c->x86_cache_occ_scale != cqm_l3_scale);




* Re: [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT
  2015-05-19  0:00 [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Thomas Gleixner
                   ` (5 preceding siblings ...)
  2015-05-19  0:00 ` [patch 6/6] x86, perf, cqm: Add storage for closid and cleanup struct intel_pqr_state Thomas Gleixner
@ 2015-05-19  7:42 ` Peter Zijlstra
  6 siblings, 0 replies; 24+ messages in thread
From: Peter Zijlstra @ 2015-05-19  7:42 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Vikas Shivappa, x86, Matt Fleming, Will Auld, Kanaka Juvva

On Tue, May 19, 2015 at 12:00:48AM -0000, Thomas Gleixner wrote:
> While reviewing the RDT/CAT patches I had to look into the perf CQM
> code. As usual when my review mood reaches the grumpiness level, I
> start to poke around in the code some more and find stuff which really
> sucks.
> 
> Here is a step by step patch series, which cleans up and clarifies the
> code and prepares it for reuse of the PQR cache for the RDT/CAT
> stuff.
> 
> Compile tested only.
> 
> Shameless hint @Intel: I have no access to a box with CQM/RDT :)

Thanks!


* Re: [patch 2/6] x86, perf, cqm: Use proper data type
  2015-05-19  0:00 ` [patch 2/6] x86, perf, cqm: Use proper data type Thomas Gleixner
@ 2015-05-19  8:58   ` Matt Fleming
  2015-05-19 13:03     ` Thomas Gleixner
  2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: Use proper data types tip-bot for Thomas Gleixner
  1 sibling, 1 reply; 24+ messages in thread
From: Matt Fleming @ 2015-05-19  8:58 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, 19 May, at 12:00:51AM, Thomas Gleixner wrote:
> int is really not a proper data type for an MSR. Use u32 to make it
> clear that we are dealing with a 32-bit unsigned hardware value.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/perf_event_intel_cqm.c |    4 ++--
>  include/linux/perf_event.h                 |    2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)

Yeah, makes sense, but this is missing a bunch of changes to other
functions that pass rmids around.

Lemme take a swing at that.

-- 
Matt Fleming, Intel Open Source Technology Center


* Re: [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache
  2015-05-19  0:00 ` [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache Thomas Gleixner
@ 2015-05-19  9:13   ` Matt Fleming
  2015-05-19 10:51     ` Peter Zijlstra
  2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
  2015-06-05 18:13   ` [patch 3/6] x86, perf, cqm: " Juvva, Kanaka D
  2 siblings, 1 reply; 24+ messages in thread
From: Matt Fleming @ 2015-05-19  9:13 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, 19 May, at 12:00:53AM, Thomas Gleixner wrote:
> struct intel_cqm_state is a strict per cpu cache of the rmid and the
> usage counter. It can never be modified from a remote cpu.
> 
> The 3 functions which modify the content: start, stop and del (del
> maps to stop) are called from the perf core with interrupts disabled
> which is enough protection for the per cpu state values.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/perf_event_intel_cqm.c |   17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)

The state locking code was taken from Peter's original patch last year,
so it would be good for him to chime in that this is safe. It's probably
just that it was necessary in Peter's patches but after I refactored
bits I forgot to rip it out.

But yeah, from reading the code again the lock does look entirely
superfluous.

So unless Peter complains,

Acked-by: Matt Fleming <matt.fleming@intel.com>

-- 
Matt Fleming, Intel Open Source Technology Center


* Re: [patch 4/6] x86, perf, cqm: Avoid pointless msr write
  2015-05-19  0:00 ` [patch 4/6] x86, perf, cqm: Avoid pointless msr write Thomas Gleixner
@ 2015-05-19  9:17   ` Matt Fleming
  2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: Avoid pointless MSR write tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 24+ messages in thread
From: Matt Fleming @ 2015-05-19  9:17 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, 19 May, at 12:00:55AM, Thomas Gleixner wrote:
> If the usage counter is non-zero there is no point in updating the rmid
> in the PQR MSR.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/perf_event_intel_cqm.c |    8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)

Good catch.

Acked-by: Matt Fleming <matt.fleming@intel.com>

-- 
Matt Fleming, Intel Open Source Technology Center


* Re: [patch 5/6] x86, perf, cqm: Remove useless wrapper function
  2015-05-19  0:00 ` [patch 5/6] x86, perf, cqm: Remove useless wrapper function Thomas Gleixner
@ 2015-05-19  9:18   ` Matt Fleming
  2015-05-27 10:04   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 24+ messages in thread
From: Matt Fleming @ 2015-05-19  9:18 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, 19 May, at 12:00:56AM, Thomas Gleixner wrote:
> intel_cqm_event_del is a 1:1 wrapper for intel_cqm_event_stop. Remove
> the useless gunk.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/perf_event_intel_cqm.c |    7 +------
>  1 file changed, 1 insertion(+), 6 deletions(-)

Acked-by: Matt Fleming <matt.fleming@intel.com>

-- 
Matt Fleming, Intel Open Source Technology Center


* Re: [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache
  2015-05-19  9:13   ` Matt Fleming
@ 2015-05-19 10:51     ` Peter Zijlstra
  0 siblings, 0 replies; 24+ messages in thread
From: Peter Zijlstra @ 2015-05-19 10:51 UTC (permalink / raw)
  To: Matt Fleming
  Cc: Thomas Gleixner, LKML, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, May 19, 2015 at 10:13:18AM +0100, Matt Fleming wrote:
> On Tue, 19 May, at 12:00:53AM, Thomas Gleixner wrote:
> > struct intel_cqm_state is a strict per cpu cache of the rmid and the
> > usage counter. It can never be modified from a remote cpu.
> > 
> > The 3 functions which modify the content: start, stop and del (del
> > maps to stop) are called from the perf core with interrupts disabled
> > which is enough protection for the per cpu state values.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > ---
> >  arch/x86/kernel/cpu/perf_event_intel_cqm.c |   17 ++++++-----------
> >  1 file changed, 6 insertions(+), 11 deletions(-)
> 
> The state locking code was taken from Peter's original patch last year,
> so it would be good for him to chime in that this is safe. It's probably
> just that it was necessary in Peter's patches but after I refactored
> bits I forgot to rip it out.
> 
> But yeah, from reading the code again the lock does look entirely
> superfluous.

I think that all stems from a point in time when it wasn't at all clear
to me what the hardware looked like, but what do I know, I can't even
remember last week.

All the patches looked good to me, so I already queued them.

I'll add your Ack on them.


* Re: [patch 1/6] x86, perf, cqm: Document PQR MSR abuse
  2015-05-19  0:00 ` [patch 1/6] x86, perf, cqm: Document PQR MSR abuse Thomas Gleixner
@ 2015-05-19 11:53   ` Matt Fleming
  2015-05-27 10:02   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 24+ messages in thread
From: Matt Fleming @ 2015-05-19 11:53 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, 19 May, at 12:00:50AM, Thomas Gleixner wrote:
> The cqm code acts like it owns the PQR MSR completely. That's not true
> because only the lower 10 bits are used for CQM. The upper 32 bits are
> used for the CLass Of Service ID (closid). Document the abuse. This
> will be fixed in a later patch.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/perf_event_intel_cqm.c |   15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)

Acked-by: Matt Fleming <matt.fleming@intel.com>

-- 
Matt Fleming, Intel Open Source Technology Center


* Re: [patch 6/6] x86, perf, cqm: Add storage for closid and cleanup struct intel_pqr_state
  2015-05-19  0:00 ` [patch 6/6] x86, perf, cqm: Add storage for closid and cleanup struct intel_pqr_state Thomas Gleixner
@ 2015-05-19 11:54   ` Matt Fleming
  2015-05-19 12:59     ` Thomas Gleixner
  2015-05-27 10:04   ` [tip:perf/core] perf/x86/intel/cqm: Add storage for 'closid' and clean up 'struct intel_pqr_state' tip-bot for Thomas Gleixner
  1 sibling, 1 reply; 24+ messages in thread
From: Matt Fleming @ 2015-05-19 11:54 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, 19 May, at 12:00:58AM, Thomas Gleixner wrote:
> closid (CLass Of Service ID) is used for the Class based Cache
> Allocation Technology (CAT). Add explicit storage to the per cpu cache
> for it, so it can be used later with the CAT support (this requires
> moving the per cpu data).
> 
> While at it:
> 
>  - Rename the structure to intel_pqr_state which reflects the actual
>    purpose of the struct: Cache values which go into the PQR MSR
> 
>  - Rename 'cnt' to rmid_usecnt which reflects the actual purpose of
>    the counter.
> 
>  - Document the structure and the struct members.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/perf_event_intel_cqm.c |   50 +++++++++++++++--------------
>  1 file changed, 27 insertions(+), 23 deletions(-)
> 
> Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> ===================================================================
> --- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> +++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> @@ -16,18 +16,32 @@
>  static unsigned int cqm_max_rmid = -1;
>  static unsigned int cqm_l3_scale; /* supposedly cacheline size */
>  
> -struct intel_cqm_state {
> +/**
> + * struct intel_pqr_state - State cache for the PQR MSR
> + * @rmid:	The cached Resource Monitoring ID
> + * @closid:	The cached Class Of Service ID
> + * @usecnt:	The usage counter for rmid
> + *

Typo? Should be @rmid_usecnt.

Otherwise,

Acked-by: Matt Fleming <matt.fleming@intel.com>

-- 
Matt Fleming, Intel Open Source Technology Center


* Re: [patch 6/6] x86, perf, cqm: Add storage for closid and cleanup struct intel_pqr_state
  2015-05-19 11:54   ` Matt Fleming
@ 2015-05-19 12:59     ` Thomas Gleixner
  0 siblings, 0 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19 12:59 UTC (permalink / raw)
  To: Matt Fleming
  Cc: LKML, Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, 19 May 2015, Matt Fleming wrote:
> On Tue, 19 May, at 12:00:58AM, Thomas Gleixner wrote:
> > closid (CLass Of Service ID) is used for the Class based Cache
> > Allocation Technology (CAT). Add explicit storage to the per cpu cache
> > for it, so it can be used later with the CAT support (this requires
> > moving the per cpu data).
> > 
> > While at it:
> > 
> >  - Rename the structure to intel_pqr_state which reflects the actual
> >    purpose of the struct: Cache values which go into the PQR MSR
> > 
> >  - Rename 'cnt' to rmid_usecnt which reflects the actual purpose of
> >    the counter.
> > 
> >  - Document the structure and the struct members.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > ---
> >  arch/x86/kernel/cpu/perf_event_intel_cqm.c |   50 +++++++++++++++--------------
> >  1 file changed, 27 insertions(+), 23 deletions(-)
> > 
> > Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> > ===================================================================
> > --- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> > +++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> > @@ -16,18 +16,32 @@
> >  static unsigned int cqm_max_rmid = -1;
> >  static unsigned int cqm_l3_scale; /* supposedly cacheline size */
> >  
> > -struct intel_cqm_state {
> > +/**
> > + * struct intel_pqr_state - State cache for the PQR MSR
> > + * @rmid:	The cached Resource Monitoring ID
> > + * @closid:	The cached Class Of Service ID
> > + * @usecnt:	The usage counter for rmid
> > + *
> 
> Typo? Should be @rmid_usecnt.

Indeed.


* Re: [patch 2/6] x86, perf, cqm: Use proper data type
  2015-05-19  8:58   ` Matt Fleming
@ 2015-05-19 13:03     ` Thomas Gleixner
  0 siblings, 0 replies; 24+ messages in thread
From: Thomas Gleixner @ 2015-05-19 13:03 UTC (permalink / raw)
  To: Matt Fleming
  Cc: LKML, Peter Zijlstra, Vikas Shivappa, x86, Matt Fleming,
	Will Auld, Kanaka Juvva

On Tue, 19 May 2015, Matt Fleming wrote:

> On Tue, 19 May, at 12:00:51AM, Thomas Gleixner wrote:
> > int is really not a proper data type for a MSR. Use u32 to make it
> > clear that we are dealing with a 32bit unsigned hardware value.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > ---
> >  arch/x86/kernel/cpu/perf_event_intel_cqm.c |    4 ++--
> >  include/linux/perf_event.h                 |    2 +-
> >  2 files changed, 3 insertions(+), 3 deletions(-)
> 
> Yeah, makes sense, but this is missing a bunch of changes to other
> functions that pass rmids around.

Right. I cared about the stuff which handles the cached state.
 
> Lemme take a swing at that.

Yes, please.

Thanks,

	tglx


* [tip:perf/core] perf/x86/intel/cqm: Document PQR MSR abuse
  2015-05-19  0:00 ` [patch 1/6] x86, perf, cqm: Document PQR MSR abuse Thomas Gleixner
  2015-05-19 11:53   ` Matt Fleming
@ 2015-05-27 10:02   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 24+ messages in thread
From: tip-bot for Thomas Gleixner @ 2015-05-27 10:02 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, will.auld, kanaka.d.juvva, vikas.shivappa, mingo, tglx,
	torvalds, linux-kernel, matt.fleming, hpa

Commit-ID:  f4d9757ca6f5a2db6919a5b1ab86b8afa16773d0
Gitweb:     http://git.kernel.org/tip/f4d9757ca6f5a2db6919a5b1ab86b8afa16773d0
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Tue, 19 May 2015 00:00:50 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 27 May 2015 09:17:38 +0200

perf/x86/intel/cqm: Document PQR MSR abuse

The CQM code acts like it owns the PQR MSR completely. That's not true
because only the lower 10 bits are used for CQM. The upper 32 bits are
used for the 'CLass Of Service ID' (CLOSID). Document the abuse. Will be
fixed in a later patch.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Cc: Will Auld <will.auld@intel.com>
Link: http://lkml.kernel.org/r/20150518235149.823214798@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index e4d1b8b..572582e 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -978,7 +978,12 @@ static void intel_cqm_event_start(struct perf_event *event, int mode)
 		WARN_ON_ONCE(state->rmid);
 
 	state->rmid = rmid;
-	wrmsrl(MSR_IA32_PQR_ASSOC, state->rmid);
+	/*
+	 * This is actually wrong, as the upper 32 bit MSR contain the
+	 * closid which is used for configuring the Cache Allocation
+	 * Technology component.
+	 */
+	wrmsr(MSR_IA32_PQR_ASSOC, rmid, 0);
 
 	raw_spin_unlock_irqrestore(&state->lock, flags);
 }
@@ -998,7 +1003,13 @@ static void intel_cqm_event_stop(struct perf_event *event, int mode)
 
 	if (!--state->cnt) {
 		state->rmid = 0;
-		wrmsrl(MSR_IA32_PQR_ASSOC, 0);
+		/*
+		 * This is actually wrong, as the upper 32 bit of the
+		 * MSR contain the closid which is used for
+		 * configuring the Cache Allocation Technology
+		 * component.
+		 */
+		wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
 	} else {
 		WARN_ON_ONCE(!state->rmid);
 	}


* [tip:perf/core] perf/x86/intel/cqm: Use proper data types
  2015-05-19  0:00 ` [patch 2/6] x86, perf, cqm: Use proper data type Thomas Gleixner
  2015-05-19  8:58   ` Matt Fleming
@ 2015-05-27 10:03   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 24+ messages in thread
From: tip-bot for Thomas Gleixner @ 2015-05-27 10:03 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, peterz, matt.fleming, linux-kernel, vikas.shivappa,
	torvalds, tglx, kanaka.d.juvva, will.auld, hpa

Commit-ID:  b3df4ec4424f27e55d754cfe586195fecca1c4e4
Gitweb:     http://git.kernel.org/tip/b3df4ec4424f27e55d754cfe586195fecca1c4e4
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Tue, 19 May 2015 00:00:51 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 27 May 2015 09:17:39 +0200

perf/x86/intel/cqm: Use proper data types

'int' is really not a proper data type for an MSR. Use u32 to make it
clear that we are dealing with a 32-bit unsigned hardware value.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Cc: Will Auld <will.auld@intel.com>
Link: http://lkml.kernel.org/r/20150518235149.919350144@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c | 4 ++--
 include/linux/perf_event.h                 | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index 572582e..3e9a7fb 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -18,7 +18,7 @@ static unsigned int cqm_l3_scale; /* supposedly cacheline size */
 
 struct intel_cqm_state {
 	raw_spinlock_t		lock;
-	int			rmid;
+	u32			rmid;
 	int			cnt;
 };
 
@@ -962,7 +962,7 @@ out:
 static void intel_cqm_event_start(struct perf_event *event, int mode)
 {
 	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
-	unsigned int rmid = event->hw.cqm_rmid;
+	u32 rmid = event->hw.cqm_rmid;
 	unsigned long flags;
 
 	if (!(event->hw.cqm_state & PERF_HES_STOPPED))
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 248f782..0658002 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -120,7 +120,7 @@ struct hw_perf_event {
 		};
 		struct { /* intel_cqm */
 			int			cqm_state;
-			int			cqm_rmid;
+			u32			cqm_rmid;
 			struct list_head	cqm_events_entry;
 			struct list_head	cqm_groups_entry;
 			struct list_head	cqm_group_entry;


* [tip:perf/core] perf/x86/intel/cqm: Remove pointless spinlock from state cache
  2015-05-19  0:00 ` [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache Thomas Gleixner
  2015-05-19  9:13   ` Matt Fleming
@ 2015-05-27 10:03   ` tip-bot for Thomas Gleixner
  2015-06-05 18:13   ` [patch 3/6] x86, perf, cqm: " Juvva, Kanaka D
  2 siblings, 0 replies; 24+ messages in thread
From: tip-bot for Thomas Gleixner @ 2015-05-27 10:03 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: matt.fleming, kanaka.d.juvva, tglx, peterz, linux-kernel, hpa,
	will.auld, vikas.shivappa, mingo, torvalds

Commit-ID:  9e7eaac95af6c1aecaf558b8c7a1757d5f2d2ad7
Gitweb:     http://git.kernel.org/tip/9e7eaac95af6c1aecaf558b8c7a1757d5f2d2ad7
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Tue, 19 May 2015 00:00:53 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 27 May 2015 09:17:39 +0200

perf/x86/intel/cqm: Remove pointless spinlock from state cache

'struct intel_cqm_state' is a strict per CPU cache of the rmid and the
usage counter. It can never be modified from a remote CPU.

The three functions which modify the content: intel_cqm_event[start|stop|del]
(del maps to stop) are called from the perf core with interrupts disabled
which is enough protection for the per CPU state values.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Cc: Will Auld <will.auld@intel.com>
Link: http://lkml.kernel.org/r/20150518235150.001006529@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index 3e9a7fb..63391f8 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -17,11 +17,16 @@ static unsigned int cqm_max_rmid = -1;
 static unsigned int cqm_l3_scale; /* supposedly cacheline size */
 
 struct intel_cqm_state {
-	raw_spinlock_t		lock;
 	u32			rmid;
 	int			cnt;
 };
 
+/*
+ * The cached intel_cqm_state is strictly per CPU and can never be
+ * updated from a remote CPU. Both functions which modify the state
+ * (intel_cqm_event_start and intel_cqm_event_stop) are called with
+ * interrupts disabled, which is sufficient for the protection.
+ */
 static DEFINE_PER_CPU(struct intel_cqm_state, cqm_state);
 
 /*
@@ -963,15 +968,12 @@ static void intel_cqm_event_start(struct perf_event *event, int mode)
 {
 	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
 	u32 rmid = event->hw.cqm_rmid;
-	unsigned long flags;
 
 	if (!(event->hw.cqm_state & PERF_HES_STOPPED))
 		return;
 
 	event->hw.cqm_state &= ~PERF_HES_STOPPED;
 
-	raw_spin_lock_irqsave(&state->lock, flags);
-
 	if (state->cnt++)
 		WARN_ON_ONCE(state->rmid != rmid);
 	else
@@ -984,21 +986,17 @@ static void intel_cqm_event_start(struct perf_event *event, int mode)
 	 * Technology component.
 	 */
 	wrmsr(MSR_IA32_PQR_ASSOC, rmid, 0);
-
-	raw_spin_unlock_irqrestore(&state->lock, flags);
 }
 
 static void intel_cqm_event_stop(struct perf_event *event, int mode)
 {
 	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
-	unsigned long flags;
 
 	if (event->hw.cqm_state & PERF_HES_STOPPED)
 		return;
 
 	event->hw.cqm_state |= PERF_HES_STOPPED;
 
-	raw_spin_lock_irqsave(&state->lock, flags);
 	intel_cqm_event_read(event);
 
 	if (!--state->cnt) {
@@ -1013,8 +1011,6 @@ static void intel_cqm_event_stop(struct perf_event *event, int mode)
 	} else {
 		WARN_ON_ONCE(!state->rmid);
 	}
-
-	raw_spin_unlock_irqrestore(&state->lock, flags);
 }
 
 static int intel_cqm_event_add(struct perf_event *event, int mode)
@@ -1257,7 +1253,6 @@ static void intel_cqm_cpu_prepare(unsigned int cpu)
 	struct intel_cqm_state *state = &per_cpu(cqm_state, cpu);
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 
-	raw_spin_lock_init(&state->lock);
 	state->rmid = 0;
 	state->cnt  = 0;
 


* [tip:perf/core] perf/x86/intel/cqm: Avoid pointless MSR write
  2015-05-19  0:00 ` [patch 4/6] x86, perf, cqm: Avoid pointless msr write Thomas Gleixner
  2015-05-19  9:17   ` Matt Fleming
@ 2015-05-27 10:03   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 24+ messages in thread
From: tip-bot for Thomas Gleixner @ 2015-05-27 10:03 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, hpa, torvalds, linux-kernel, matt.fleming, will.auld,
	vikas.shivappa, tglx, mingo, kanaka.d.juvva

Commit-ID:  0bac237845e203dd1439cfc571b1baf1b2274b3b
Gitweb:     http://git.kernel.org/tip/0bac237845e203dd1439cfc571b1baf1b2274b3b
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Tue, 19 May 2015 00:00:55 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 27 May 2015 09:17:40 +0200

perf/x86/intel/cqm: Avoid pointless MSR write

If the usage counter is non-zero there is no point to update the rmid
in the PQR MSR.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Cc: Will Auld <will.auld@intel.com>
Link: http://lkml.kernel.org/r/20150518235150.080844281@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index 63391f8..2ce69c0 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -974,10 +974,12 @@ static void intel_cqm_event_start(struct perf_event *event, int mode)
 
 	event->hw.cqm_state &= ~PERF_HES_STOPPED;
 
-	if (state->cnt++)
-		WARN_ON_ONCE(state->rmid != rmid);
-	else
+	if (state->cnt++) {
+		if (!WARN_ON_ONCE(state->rmid != rmid))
+			return;
+	} else {
 		WARN_ON_ONCE(state->rmid);
+	}
 
 	state->rmid = rmid;
 	/*


* [tip:perf/core] perf/x86/intel/cqm: Remove useless wrapper function
  2015-05-19  0:00 ` [patch 5/6] x86, perf, cqm: Remove useless wrapper function Thomas Gleixner
  2015-05-19  9:18   ` Matt Fleming
@ 2015-05-27 10:04   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 24+ messages in thread
From: tip-bot for Thomas Gleixner @ 2015-05-27 10:04 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, kanaka.d.juvva, linux-kernel, vikas.shivappa,
	matt.fleming, tglx, mingo, peterz, hpa, will.auld

Commit-ID:  43d0c2f6dcd07ffc0de658a7fbeeb63c806e9caa
Gitweb:     http://git.kernel.org/tip/43d0c2f6dcd07ffc0de658a7fbeeb63c806e9caa
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Tue, 19 May 2015 00:00:56 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 27 May 2015 09:17:40 +0200

perf/x86/intel/cqm: Remove useless wrapper function

intel_cqm_event_del() is a 1:1 wrapper for intel_cqm_event_stop().
Remove the useless indirection.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Cc: Will Auld <will.auld@intel.com>
Link: http://lkml.kernel.org/r/20150518235150.159779847@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index 2ce69c0..8241b64 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -1033,11 +1033,6 @@ static int intel_cqm_event_add(struct perf_event *event, int mode)
 	return 0;
 }
 
-static void intel_cqm_event_del(struct perf_event *event, int mode)
-{
-	intel_cqm_event_stop(event, mode);
-}
-
 static void intel_cqm_event_destroy(struct perf_event *event)
 {
 	struct perf_event *group_other = NULL;
@@ -1230,7 +1225,7 @@ static struct pmu intel_cqm_pmu = {
 	.task_ctx_nr	     = perf_sw_context,
 	.event_init	     = intel_cqm_event_init,
 	.add		     = intel_cqm_event_add,
-	.del		     = intel_cqm_event_del,
+	.del		     = intel_cqm_event_stop,
 	.start		     = intel_cqm_event_start,
 	.stop		     = intel_cqm_event_stop,
 	.read		     = intel_cqm_event_read,


* [tip:perf/core] perf/x86/intel/cqm: Add storage for 'closid' and clean up 'struct intel_pqr_state'
  2015-05-19  0:00 ` [patch 6/6] x86, perf, cqm: Add storage for closid and cleanup struct intel_pqr_state Thomas Gleixner
  2015-05-19 11:54   ` Matt Fleming
@ 2015-05-27 10:04   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 24+ messages in thread
From: tip-bot for Thomas Gleixner @ 2015-05-27 10:04 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, linux-kernel, peterz, torvalds, kanaka.d.juvva,
	vikas.shivappa, will.auld, hpa, matt.fleming, mingo

Commit-ID:  bf926731e1585ccad029ca2fad1444fee082b78d
Gitweb:     http://git.kernel.org/tip/bf926731e1585ccad029ca2fad1444fee082b78d
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Tue, 19 May 2015 00:00:58 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 27 May 2015 09:17:41 +0200

perf/x86/intel/cqm: Add storage for 'closid' and clean up 'struct intel_pqr_state'

'closid' (CLass Of Service ID) is used for the Class based Cache
Allocation Technology (CAT). Add explicit storage to the per cpu cache
for it, so it can be used later with the CAT support (requires to move
the per cpu data).

While at it:

 - Rename the structure to intel_pqr_state which reflects the actual
   purpose of the struct: cache values which go into the PQR MSR

 - Rename 'cnt' to rmid_usecnt which reflects the actual purpose of
   the counter.

 - Document the structure and the struct members.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Cc: Will Auld <will.auld@intel.com>
Link: http://lkml.kernel.org/r/20150518235150.240899319@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c | 50 ++++++++++++++++--------------
 1 file changed, 27 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index 8241b64..8233b29 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -16,18 +16,32 @@
 static unsigned int cqm_max_rmid = -1;
 static unsigned int cqm_l3_scale; /* supposedly cacheline size */
 
-struct intel_cqm_state {
+/**
+ * struct intel_pqr_state - State cache for the PQR MSR
+ * @rmid:		The cached Resource Monitoring ID
+ * @closid:		The cached Class Of Service ID
+ * @rmid_usecnt:	The usage counter for rmid
+ *
+ * The upper 32 bits of MSR_IA32_PQR_ASSOC contain closid and the
+ * lower 10 bits rmid. The update to MSR_IA32_PQR_ASSOC always
+ * contains both parts, so we need to cache them.
+ *
+ * The cache also helps to avoid pointless updates if the value does
+ * not change.
+ */
+struct intel_pqr_state {
 	u32			rmid;
-	int			cnt;
+	u32			closid;
+	int			rmid_usecnt;
 };
 
 /*
- * The cached intel_cqm_state is strictly per CPU and can never be
+ * The cached intel_pqr_state is strictly per CPU and can never be
  * updated from a remote CPU. Both functions which modify the state
  * (intel_cqm_event_start and intel_cqm_event_stop) are called with
  * interrupts disabled, which is sufficient for the protection.
  */
-static DEFINE_PER_CPU(struct intel_cqm_state, cqm_state);
+static DEFINE_PER_CPU(struct intel_pqr_state, pqr_state);
 
 /*
  * Protects cache_cgroups and cqm_rmid_free_lru and cqm_rmid_limbo_lru.
@@ -966,7 +980,7 @@ out:
 
 static void intel_cqm_event_start(struct perf_event *event, int mode)
 {
-	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
+	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
 	u32 rmid = event->hw.cqm_rmid;
 
 	if (!(event->hw.cqm_state & PERF_HES_STOPPED))
@@ -974,7 +988,7 @@ static void intel_cqm_event_start(struct perf_event *event, int mode)
 
 	event->hw.cqm_state &= ~PERF_HES_STOPPED;
 
-	if (state->cnt++) {
+	if (state->rmid_usecnt++) {
 		if (!WARN_ON_ONCE(state->rmid != rmid))
 			return;
 	} else {
@@ -982,17 +996,12 @@ static void intel_cqm_event_start(struct perf_event *event, int mode)
 	}
 
 	state->rmid = rmid;
-	/*
-	 * This is actually wrong, as the upper 32 bit MSR contain the
-	 * closid which is used for configuring the Cache Allocation
-	 * Technology component.
-	 */
-	wrmsr(MSR_IA32_PQR_ASSOC, rmid, 0);
+	wrmsr(MSR_IA32_PQR_ASSOC, rmid, state->closid);
 }
 
 static void intel_cqm_event_stop(struct perf_event *event, int mode)
 {
-	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
+	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
 
 	if (event->hw.cqm_state & PERF_HES_STOPPED)
 		return;
@@ -1001,15 +1010,9 @@ static void intel_cqm_event_stop(struct perf_event *event, int mode)
 
 	intel_cqm_event_read(event);
 
-	if (!--state->cnt) {
+	if (!--state->rmid_usecnt) {
 		state->rmid = 0;
-		/*
-		 * This is actually wrong, as the upper 32 bit of the
-		 * MSR contain the closid which is used for
-		 * configuring the Cache Allocation Technology
-		 * component.
-		 */
-		wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
+		wrmsr(MSR_IA32_PQR_ASSOC, 0, state->closid);
 	} else {
 		WARN_ON_ONCE(!state->rmid);
 	}
@@ -1247,11 +1250,12 @@ static inline void cqm_pick_event_reader(int cpu)
 
 static void intel_cqm_cpu_prepare(unsigned int cpu)
 {
-	struct intel_cqm_state *state = &per_cpu(cqm_state, cpu);
+	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 
 	state->rmid = 0;
-	state->cnt  = 0;
+	state->closid = 0;
+	state->rmid_usecnt = 0;
 
 	WARN_ON(c->x86_cache_max_rmid != cqm_max_rmid);
 	WARN_ON(c->x86_cache_occ_scale != cqm_l3_scale);


* RE: [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache
  2015-05-19  0:00 ` [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache Thomas Gleixner
  2015-05-19  9:13   ` Matt Fleming
  2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
@ 2015-06-05 18:13   ` Juvva, Kanaka D
  2 siblings, 0 replies; 24+ messages in thread
From: Juvva, Kanaka D @ 2015-06-05 18:13 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: Peter Zijlstra, Vikas Shivappa, x86, Fleming, Matt, Auld, Will

Tested-by: Kanaka Juvva <kanaka.d.juvva@intel.com>

> -----Original Message-----
> From: Thomas Gleixner [mailto:tglx@linutronix.de]
> Sent: Monday, May 18, 2015 5:01 PM
> To: LKML
> Cc: Peter Zijlstra; Vikas Shivappa; x86@kernel.org; Fleming, Matt; Auld, Will;
> Juvva, Kanaka D
> Subject: [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache
> 
> struct intel_cqm_state is a strict per cpu cache of the rmid and the usage
> counter. It can never be modified from a remote cpu.
> 
> The 3 functions which modify the content: start, stop and del (del maps to stop)
> are called from the perf core with interrupts disabled which is enough protection
> for the per cpu state values.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/perf_event_intel_cqm.c |   17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
> 
> Index: linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> > ===================================================================
> --- linux.orig/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> +++ linux/arch/x86/kernel/cpu/perf_event_intel_cqm.c
> @@ -17,11 +17,16 @@ static unsigned int cqm_max_rmid = -1;  static unsigned
> int cqm_l3_scale; /* supposedly cacheline size */
> 
>  struct intel_cqm_state {
> -	raw_spinlock_t		lock;
>  	u32			rmid;
>  	int			cnt;
>  };
> 
> +/*
> + * The cached intel_cqm_state is strictly per cpu and can never be
> + * updated from a remote cpu. Both functions which modify the state
> + * (intel_cqm_event_start and intel_cqm_event_stop) are called with
> + * interrupts disabled, which is sufficient for the protection.
> + */
>  static DEFINE_PER_CPU(struct intel_cqm_state, cqm_state);
> 
>  /*
> @@ -963,15 +968,12 @@ static void intel_cqm_event_start(struct  {
>  	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
>  	u32 rmid = event->hw.cqm_rmid;
> -	unsigned long flags;
> 
>  	if (!(event->hw.cqm_state & PERF_HES_STOPPED))
>  		return;
> 
>  	event->hw.cqm_state &= ~PERF_HES_STOPPED;
> 
> -	raw_spin_lock_irqsave(&state->lock, flags);
> -
>  	if (state->cnt++)
>  		WARN_ON_ONCE(state->rmid != rmid);
>  	else
> @@ -984,21 +986,17 @@ static void intel_cqm_event_start(struct
>  	 * Technology component.
>  	 */
>  	wrmsr(MSR_IA32_PQR_ASSOC, rmid, 0);
> -
> -	raw_spin_unlock_irqrestore(&state->lock, flags);
>  }
> 
> >  static void intel_cqm_event_stop(struct perf_event *event, int mode)
> >  {
>  	struct intel_cqm_state *state = this_cpu_ptr(&cqm_state);
> -	unsigned long flags;
> 
>  	if (event->hw.cqm_state & PERF_HES_STOPPED)
>  		return;
> 
>  	event->hw.cqm_state |= PERF_HES_STOPPED;
> 
> -	raw_spin_lock_irqsave(&state->lock, flags);
>  	intel_cqm_event_read(event);
> 
>  	if (!--state->cnt) {
> @@ -1013,8 +1011,6 @@ static void intel_cqm_event_stop(struct
>  	} else {
>  		WARN_ON_ONCE(!state->rmid);
>  	}
> -
> -	raw_spin_unlock_irqrestore(&state->lock, flags);
>  }
> 
> >  static int intel_cqm_event_add(struct perf_event *event, int mode)
> > @@ -1257,7 +1253,6 @@ static void intel_cqm_cpu_prepare(unsigned int cpu)
>  	struct intel_cqm_state *state = &per_cpu(cqm_state, cpu);
>  	struct cpuinfo_x86 *c = &cpu_data(cpu);
> 
> -	raw_spin_lock_init(&state->lock);
>  	state->rmid = 0;
>  	state->cnt  = 0;
> 
> 



end of thread, other threads:[~2015-06-05 18:13 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-19  0:00 [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Thomas Gleixner
2015-05-19  0:00 ` [patch 1/6] x86, perf, cqm: Document PQR MSR abuse Thomas Gleixner
2015-05-19 11:53   ` Matt Fleming
2015-05-27 10:02   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
2015-05-19  0:00 ` [patch 2/6] x86, perf, cqm: Use proper data type Thomas Gleixner
2015-05-19  8:58   ` Matt Fleming
2015-05-19 13:03     ` Thomas Gleixner
2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: Use proper data types tip-bot for Thomas Gleixner
2015-05-19  0:00 ` [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache Thomas Gleixner
2015-05-19  9:13   ` Matt Fleming
2015-05-19 10:51     ` Peter Zijlstra
2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
2015-06-05 18:13   ` [patch 3/6] x86, perf, cqm: " Juvva, Kanaka D
2015-05-19  0:00 ` [patch 4/6] x86, perf, cqm: Avoid pointless msr write Thomas Gleixner
2015-05-19  9:17   ` Matt Fleming
2015-05-27 10:03   ` [tip:perf/core] perf/x86/intel/cqm: Avoid pointless MSR write tip-bot for Thomas Gleixner
2015-05-19  0:00 ` [patch 5/6] x86, perf, cqm: Remove useless wrapper function Thomas Gleixner
2015-05-19  9:18   ` Matt Fleming
2015-05-27 10:04   ` [tip:perf/core] perf/x86/intel/cqm: " tip-bot for Thomas Gleixner
2015-05-19  0:00 ` [patch 6/6] x86, perf, cqm: Add storage for closid and cleanup struct intel_pqr_state Thomas Gleixner
2015-05-19 11:54   ` Matt Fleming
2015-05-19 12:59     ` Thomas Gleixner
2015-05-27 10:04   ` [tip:perf/core] perf/x86/intel/cqm: Add storage for 'closid' and clean up 'struct intel_pqr_state' tip-bot for Thomas Gleixner
2015-05-19  7:42 ` [patch 0/6] x86, perf, cqm: Cleanups and preparation for RDT/CAT Peter Zijlstra
