* [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore
@ 2018-01-15 18:57 kan.liang
  2018-01-15 18:57 ` [PATCH V5 2/8] perf/x86/intel/uncore: correct fixed counter index check for NHM kan.liang
                   ` (7 more replies)
  0 siblings, 8 replies; 27+ messages in thread
From: kan.liang @ 2018-01-15 18:57 UTC (permalink / raw)
  To: tglx, mingo, peterz, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

There are two free running counters for the client IMC uncore. The
custom event_init() function hardcodes their indices to
'UNCORE_PMC_IDX_FIXED' and 'UNCORE_PMC_IDX_FIXED + 1'. To support the
'UNCORE_PMC_IDX_FIXED + 1' case, the generic uncore_perf_event_update()
is obscurely hacked. This hack will cause problems when a new counter
index, e.g. a free running counter index, is introduced into the
generic code.

Introduce a customized event_read() function for the client IMC uncore.
It is an exact copy of the current generic uncore_pmu_event_read(), so
the 'UNCORE_PMC_IDX_FIXED + 1' case is isolated to the client IMC
uncore only.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

No changes since V4

 arch/x86/events/intel/uncore_snb.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
index aee5e84..df53521 100644
--- a/arch/x86/events/intel/uncore_snb.c
+++ b/arch/x86/events/intel/uncore_snb.c
@@ -450,6 +450,35 @@ static void snb_uncore_imc_event_start(struct perf_event *event, int flags)
 		uncore_pmu_start_hrtimer(box);
 }
 
+static void snb_uncore_imc_event_read(struct perf_event *event)
+{
+	struct intel_uncore_box *box = uncore_event_to_box(event);
+	u64 prev_count, new_count, delta;
+	int shift;
+
+	/*
+	 * There are two free running counters in IMC.
+	 * The index for the second one is hardcoded to
+	 * UNCORE_PMC_IDX_FIXED + 1.
+	 */
+	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
+		shift = 64 - uncore_fixed_ctr_bits(box);
+	else
+		shift = 64 - uncore_perf_ctr_bits(box);
+
+	/* the hrtimer might modify the previous event value */
+again:
+	prev_count = local64_read(&event->hw.prev_count);
+	new_count = uncore_read_counter(box, event);
+	if (local64_xchg(&event->hw.prev_count, new_count) != prev_count)
+		goto again;
+
+	delta = (new_count << shift) - (prev_count << shift);
+	delta >>= shift;
+
+	local64_add(delta, &event->count);
+}
+
 static void snb_uncore_imc_event_stop(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
@@ -472,7 +501,7 @@ static void snb_uncore_imc_event_stop(struct perf_event *event, int flags)
 		 * Drain the remaining delta count out of a event
 		 * that we are disabling:
 		 */
-		uncore_perf_event_update(box, event);
+		snb_uncore_imc_event_read(event);
 		hwc->state |= PERF_HES_UPTODATE;
 	}
 }
@@ -534,7 +563,7 @@ static struct pmu snb_uncore_imc_pmu = {
 	.del		= snb_uncore_imc_event_del,
 	.start		= snb_uncore_imc_event_start,
 	.stop		= snb_uncore_imc_event_stop,
-	.read		= uncore_pmu_event_read,
+	.read		= snb_uncore_imc_event_read,
 };
 
 static struct intel_uncore_ops snb_uncore_imc_ops = {
-- 
2.7.4


* [PATCH V5 2/8] perf/x86/intel/uncore: correct fixed counter index check for NHM
  2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
@ 2018-01-15 18:57 ` kan.liang
  2018-01-15 18:57 ` [PATCH V5 3/8] perf/x86/intel/uncore: correct fixed counter index check in generic code kan.liang
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 27+ messages in thread
From: kan.liang @ 2018-01-15 18:57 UTC (permalink / raw)
  To: tglx, mingo, peterz, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

For Nehalem and Westmere there is only one fixed counter in the W-Box,
so no index is ever bigger than UNCORE_PMC_IDX_FIXED and it is not
correct to use '>=' to check for the fixed counter. This will cause
problems when a new counter index is introduced.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---

Change since V4:
 - Add reviewed-by

 arch/x86/events/intel/uncore_nhmex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/uncore_nhmex.c b/arch/x86/events/intel/uncore_nhmex.c
index 93e7a83..173e267 100644
--- a/arch/x86/events/intel/uncore_nhmex.c
+++ b/arch/x86/events/intel/uncore_nhmex.c
@@ -246,7 +246,7 @@ static void nhmex_uncore_msr_enable_event(struct intel_uncore_box *box, struct p
 {
 	struct hw_perf_event *hwc = &event->hw;
 
-	if (hwc->idx >= UNCORE_PMC_IDX_FIXED)
+	if (hwc->idx == UNCORE_PMC_IDX_FIXED)
 		wrmsrl(hwc->config_base, NHMEX_PMON_CTL_EN_BIT0);
 	else if (box->pmu->type->event_mask & NHMEX_PMON_CTL_EN_BIT0)
 		wrmsrl(hwc->config_base, hwc->config | NHMEX_PMON_CTL_EN_BIT22);
-- 
2.7.4


* [PATCH V5 3/8] perf/x86/intel/uncore: correct fixed counter index check in generic code
  2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
  2018-01-15 18:57 ` [PATCH V5 2/8] perf/x86/intel/uncore: correct fixed counter index check for NHM kan.liang
@ 2018-01-15 18:57 ` kan.liang
  2018-01-15 18:57 ` [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters kan.liang
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 27+ messages in thread
From: kan.liang @ 2018-01-15 18:57 UTC (permalink / raw)
  To: tglx, mingo, peterz, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

There is no index bigger than UNCORE_PMC_IDX_FIXED. The only exception
is the client IMC uncore, which has a customized function to deal with
the 'UNCORE_PMC_IDX_FIXED + 1' case and does not touch the generic
code. In the generic code it is not correct to use '>=' to check for
the fixed counter. This will cause problems when a new counter index
is introduced.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---

Change since V4:
 - Add reviewed-by

 arch/x86/events/intel/uncore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 7874c98..603bf11 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -218,7 +218,7 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
 	u64 prev_count, new_count, delta;
 	int shift;
 
-	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
+	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
 		shift = 64 - uncore_fixed_ctr_bits(box);
 	else
 		shift = 64 - uncore_perf_ctr_bits(box);
-- 
2.7.4


* [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
  2018-01-15 18:57 ` [PATCH V5 2/8] perf/x86/intel/uncore: correct fixed counter index check for NHM kan.liang
  2018-01-15 18:57 ` [PATCH V5 3/8] perf/x86/intel/uncore: correct fixed counter index check in generic code kan.liang
@ 2018-01-15 18:57 ` kan.liang
  2018-01-18 13:32   ` Peter Zijlstra
  2018-01-15 18:57 ` [PATCH V5 5/8] perf/x86/intel/uncore: add infrastructure for free running counter kan.liang
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 27+ messages in thread
From: kan.liang @ 2018-01-15 18:57 UTC (permalink / raw)
  To: tglx, mingo, peterz, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

There are a number of free running counters introduced for uncore,
which provide highly valuable information to a wide array of customers.
For example, Skylake Server has IIO free running counters to collect
Input/Output x BW/Utilization. Supporting them also frees the precious
generic counters to collect other data of interest.

The free running counters are read-only and always active. The current
generic uncore code does not support this kind of counter.

Introduce a new index to indicate the free running counters. A single
index is enough for all of them: the free running counters are always
active and the event and the counter are always 1:1 mapped, so no
extra index is needed to indicate the assigned counter.

Introduce rules to encode the events for the free running counters:
- The event for a free running counter has the same event code 0xff as
  the event for a fixed counter.
- The umask of the event starts from 0x10. Umasks less than 0x10 are
  reserved for the events of the fixed counters.
- The free running counters can be divided into different types
  according to MSR location, bit width and definition. The umask start
  points of the different types are 0x10 apart.
For example, there are three types of IIO free running counters on
Skylake Server: IO CLOCKS counters, BANDWIDTH counters and UTILIZATION
counters. The event code for all free running counters is 0xff.
'ioclk' is the first counter of IO CLOCKS, the first type of free
running counters, whose umask starts from 0x10. So 'ioclk' is encoded
as event=0xff,umask=0x10.
'bw_in_port2' is the third counter of the BANDWIDTH counters, the
second type, whose umask starts from 0x20. So 'bw_in_port2' is encoded
as event=0xff,umask=0x22.

Introduce a new data structure to store the free running counter
information for each type: the number of counters, the bit width, the
base address, the offset between counters and the offset between
boxes.

Introduce several inline helpers to check the index of a fixed or free
running counter, to validate a free running counter event, and to
retrieve the free running counter information for a given box and
event.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---

Change since V4:
 - Add reviewed-by
 - Add space between define and function declaration

 arch/x86/events/intel/uncore.h | 119 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 118 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index 414dc7e..280a242 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -12,8 +12,13 @@
 
 #define UNCORE_FIXED_EVENT		0xff
 #define UNCORE_PMC_IDX_MAX_GENERIC	8
+#define UNCORE_PMC_IDX_MAX_FIXED	1
+#define UNCORE_PMC_IDX_MAX_FREERUNNING	1
 #define UNCORE_PMC_IDX_FIXED		UNCORE_PMC_IDX_MAX_GENERIC
-#define UNCORE_PMC_IDX_MAX		(UNCORE_PMC_IDX_FIXED + 1)
+#define UNCORE_PMC_IDX_FREERUNNING	(UNCORE_PMC_IDX_FIXED + \
+					UNCORE_PMC_IDX_MAX_FIXED)
+#define UNCORE_PMC_IDX_MAX		(UNCORE_PMC_IDX_FREERUNNING + \
+					UNCORE_PMC_IDX_MAX_FREERUNNING)
 
 #define UNCORE_PCI_DEV_FULL_DATA(dev, func, type, idx)	\
 		((dev << 24) | (func << 16) | (type << 8) | idx)
@@ -35,6 +40,7 @@ struct intel_uncore_ops;
 struct intel_uncore_pmu;
 struct intel_uncore_box;
 struct uncore_event_desc;
+struct freerunning_counters;
 
 struct intel_uncore_type {
 	const char *name;
@@ -42,6 +48,7 @@ struct intel_uncore_type {
 	int num_boxes;
 	int perf_ctr_bits;
 	int fixed_ctr_bits;
+	int num_freerunning_types;
 	unsigned perf_ctr;
 	unsigned event_ctl;
 	unsigned event_mask;
@@ -59,6 +66,7 @@ struct intel_uncore_type {
 	struct intel_uncore_pmu *pmus;
 	struct intel_uncore_ops *ops;
 	struct uncore_event_desc *event_descs;
+	struct freerunning_counters *freerunning;
 	const struct attribute_group *attr_groups[4];
 	struct pmu *pmu; /* for custom pmu ops */
 };
@@ -129,6 +137,14 @@ struct uncore_event_desc {
 	const char *config;
 };
 
+struct freerunning_counters {
+	unsigned int counter_base;
+	unsigned int counter_offset;
+	unsigned int box_offset;
+	unsigned int num_counters;
+	unsigned int bits;
+};
+
 struct pci2phy_map {
 	struct list_head list;
 	int segment;
@@ -157,6 +173,16 @@ static ssize_t __uncore_##_var##_show(struct kobject *kobj,		\
 static struct kobj_attribute format_attr_##_var =			\
 	__ATTR(_name, 0444, __uncore_##_var##_show, NULL)
 
+static inline bool uncore_pmc_fixed(int idx)
+{
+	return idx == UNCORE_PMC_IDX_FIXED;
+}
+
+static inline bool uncore_pmc_freerunning(int idx)
+{
+	return idx == UNCORE_PMC_IDX_FREERUNNING;
+}
+
 static inline unsigned uncore_pci_box_ctl(struct intel_uncore_box *box)
 {
 	return box->pmu->type->box_ctl;
@@ -214,6 +240,56 @@ static inline unsigned uncore_msr_fixed_ctr(struct intel_uncore_box *box)
 	return box->pmu->type->fixed_ctr + uncore_msr_box_offset(box);
 }
 
+
+/*
+ * Free running counter is similar as fixed counter, except it is read-only
+ * and always active when the uncore box is powered up.
+ *
+ * Here are the rules which are used to encode the event for free running
+ * counter.
+ * - The event for free running counter has the same event code 0xff as
+ *   the event for fixed counter.
+ * - The umask of the event starts from 0x10. The umask which is less
+ *   than 0x10 is reserved for the event of fixed counter.
+ * - The free running counters can be divided into different types according
+ *   to the MSR location, bit width or definition. The start point of the
+ *   umask for different type has 0x10 offset.
+ *
+ * For example, there are three types of IIO free running counters on Skylake
+ * server, IO CLOCKS counters, BANDWIDTH counters and UTILIZATION counters.
+ * The event code for all the free running counters is 0xff.
+ * 'ioclk' is the first counter of IO CLOCKS. IO CLOCKS is the first type,
+ * which umask starts from 0x10.
+ * So 'ioclk' is encoded as event=0xff,umask=0x10
+ * 'bw_in_port2' is the third counter of BANDWIDTH counters. BANDWIDTH is
+ * the second type, which umask starts from 0x20.
+ * So 'bw_in_port2' is encoded as event=0xff,umask=0x22
+ */
+static inline unsigned int uncore_freerunning_idx(u64 config)
+{
+	return ((config >> 8) & 0xf);
+}
+
+#define UNCORE_FREERUNNING_UMASK_START		0x10
+
+static inline unsigned int uncore_freerunning_type(u64 config)
+{
+	return ((((config >> 8) - UNCORE_FREERUNNING_UMASK_START) >> 4) & 0xf);
+}
+
+static inline
+unsigned int uncore_freerunning_counter(struct intel_uncore_box *box,
+					struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_type(event->attr.config);
+	unsigned int idx = uncore_freerunning_idx(event->attr.config);
+	struct intel_uncore_pmu *pmu = box->pmu;
+
+	return pmu->type->freerunning[type].counter_base +
+	       pmu->type->freerunning[type].counter_offset * idx +
+	       pmu->type->freerunning[type].box_offset * pmu->pmu_idx;
+}
+
 static inline
 unsigned uncore_msr_event_ctl(struct intel_uncore_box *box, int idx)
 {
@@ -276,11 +352,52 @@ static inline int uncore_fixed_ctr_bits(struct intel_uncore_box *box)
 	return box->pmu->type->fixed_ctr_bits;
 }
 
+static inline
+unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+				     struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_type(event->attr.config);
+
+	return box->pmu->type->freerunning[type].bits;
+}
+
+static inline int uncore_num_freerunning(struct intel_uncore_box *box,
+					 struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_type(event->attr.config);
+
+	return box->pmu->type->freerunning[type].num_counters;
+}
+
+static inline int uncore_num_freerunning_types(struct intel_uncore_box *box,
+					       struct perf_event *event)
+{
+	return box->pmu->type->num_freerunning_types;
+}
+
+static inline bool check_valid_freerunning_event(struct intel_uncore_box *box,
+						 struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_type(event->attr.config);
+	unsigned int idx = uncore_freerunning_idx(event->attr.config);
+
+	return (type < uncore_num_freerunning_types(box, event)) &&
+	       (idx < uncore_num_freerunning(box, event));
+}
+
 static inline int uncore_num_counters(struct intel_uncore_box *box)
 {
 	return box->pmu->type->num_counters;
 }
 
+static inline bool is_freerunning_event(struct perf_event *event)
+{
+	u64 cfg = event->attr.config;
+
+	return ((cfg & UNCORE_FIXED_EVENT) == UNCORE_FIXED_EVENT) &&
+	       (((cfg >> 8) & 0xff) >= UNCORE_FREERUNNING_UMASK_START);
+}
+
 static inline void uncore_disable_box(struct intel_uncore_box *box)
 {
 	if (box->pmu->type->ops->disable_box)
-- 
2.7.4


* [PATCH V5 5/8] perf/x86/intel/uncore: add infrastructure for free running counter
  2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
                   ` (2 preceding siblings ...)
  2018-01-15 18:57 ` [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters kan.liang
@ 2018-01-15 18:57 ` kan.liang
  2018-01-15 18:57 ` [PATCH V5 6/8] perf/x86/intel/uncore: SKX support for IIO free running counters kan.liang
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 27+ messages in thread
From: kan.liang @ 2018-01-15 18:57 UTC (permalink / raw)
  To: tglx, mingo, peterz, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

There are a number of free running counters introduced for uncore,
which provide highly valuable information to a wide array of customers.
However, the generic uncore code doesn't support them yet.

The free running counters are specially handled based on their unique
attributes:
 - They are read-only. They cannot be enabled or disabled.
 - The event and the counter are always 1:1 mapped. The counter does
   not need to be assigned and the event does not need to be tracked
   in event_list.
 - They are always active, so their availability does not need to be
   checked.
 - They have different bit widths.

Also use the new inline helpers to replace the open-coded checks for
the fixed and free running counters.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---

Change since V4:
 - Add reviewed-by
 - Modify the changelog

 arch/x86/events/intel/uncore.c | 68 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 64 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 603bf11..f38a7bb 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -203,7 +203,7 @@ static void uncore_assign_hw_event(struct intel_uncore_box *box,
 	hwc->idx = idx;
 	hwc->last_tag = ++box->tags[idx];
 
-	if (hwc->idx == UNCORE_PMC_IDX_FIXED) {
+	if (uncore_pmc_fixed(hwc->idx)) {
 		hwc->event_base = uncore_fixed_ctr(box);
 		hwc->config_base = uncore_fixed_ctl(box);
 		return;
@@ -218,7 +218,9 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
 	u64 prev_count, new_count, delta;
 	int shift;
 
-	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
+	if (uncore_pmc_freerunning(event->hw.idx))
+		shift = 64 - uncore_freerunning_bits(box, event);
+	else if (uncore_pmc_fixed(event->hw.idx))
 		shift = 64 - uncore_fixed_ctr_bits(box);
 	else
 		shift = 64 - uncore_perf_ctr_bits(box);
@@ -454,10 +456,25 @@ static void uncore_pmu_event_start(struct perf_event *event, int flags)
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	int idx = event->hw.idx;
 
-	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
 		return;
 
-	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
+	/*
+	 * Free running counter is read-only and always active.
+	 * Use the current counter value as start point.
+	 * There is no overflow interrupt for free running counter.
+	 * Use hrtimer to periodically poll the counter to avoid overflow.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx)) {
+		list_add_tail(&event->active_entry, &box->active_list);
+		local64_set(&event->hw.prev_count,
+			    uncore_read_counter(box, event));
+		if (box->n_active++ == 0)
+			uncore_pmu_start_hrtimer(box);
+		return;
+	}
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
 		return;
 
 	event->hw.state = 0;
@@ -479,6 +496,15 @@ static void uncore_pmu_event_stop(struct perf_event *event, int flags)
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	struct hw_perf_event *hwc = &event->hw;
 
+	/* Cannot disable free running counter which is read-only */
+	if (uncore_pmc_freerunning(hwc->idx)) {
+		list_del(&event->active_entry);
+		if (--box->n_active == 0)
+			uncore_pmu_cancel_hrtimer(box);
+		uncore_perf_event_update(box, event);
+		return;
+	}
+
 	if (__test_and_clear_bit(hwc->idx, box->active_mask)) {
 		uncore_disable_event(box, event);
 		box->n_active--;
@@ -512,6 +538,17 @@ static int uncore_pmu_event_add(struct perf_event *event, int flags)
 	if (!box)
 		return -ENODEV;
 
+	/*
+	 * The free running counter is assigned in event_init().
+	 * The free running counter event and free running counter
+	 * are 1:1 mapped. It doesn't need to be tracked in event_list.
+	 */
+	if (uncore_pmc_freerunning(hwc->idx)) {
+		if (flags & PERF_EF_START)
+			uncore_pmu_event_start(event, 0);
+		return 0;
+	}
+
 	ret = n = uncore_collect_events(box, event, false);
 	if (ret < 0)
 		return ret;
@@ -570,6 +607,14 @@ static void uncore_pmu_event_del(struct perf_event *event, int flags)
 
 	uncore_pmu_event_stop(event, PERF_EF_UPDATE);
 
+	/*
+	 * The event for free running counter is not tracked by event_list.
+	 * It doesn't need to force event->hw.idx = -1 to reassign the counter.
+	 * Because the event and the free running counter are 1:1 mapped.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx))
+		return;
+
 	for (i = 0; i < box->n_events; i++) {
 		if (event == box->event_list[i]) {
 			uncore_put_event_constraint(box, event);
@@ -603,6 +648,10 @@ static int uncore_validate_group(struct intel_uncore_pmu *pmu,
 	struct intel_uncore_box *fake_box;
 	int ret = -EINVAL, n;
 
+	/* The free running counter is always active. */
+	if (uncore_pmc_freerunning(event->hw.idx))
+		return 0;
+
 	fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE);
 	if (!fake_box)
 		return -ENOMEM;
@@ -690,6 +739,17 @@ static int uncore_pmu_event_init(struct perf_event *event)
 
 		/* fixed counters have event field hardcoded to zero */
 		hwc->config = 0ULL;
+	} else if (is_freerunning_event(event)) {
+		if (!check_valid_freerunning_event(box, event))
+			return -EINVAL;
+		event->hw.idx = UNCORE_PMC_IDX_FREERUNNING;
+		/*
+		 * The free running counter event and free running counter
+		 * are always 1:1 mapped.
+		 * The free running counter is always active.
+		 * Assign the free running counter here.
+		 */
+		event->hw.event_base = uncore_freerunning_counter(box, event);
 	} else {
 		hwc->config = event->attr.config &
 			      (pmu->type->event_mask | ((u64)pmu->type->event_mask_ext << 32));
-- 
2.7.4


* [PATCH V5 6/8] perf/x86/intel/uncore: SKX support for IIO free running counters
  2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
                   ` (3 preceding siblings ...)
  2018-01-15 18:57 ` [PATCH V5 5/8] perf/x86/intel/uncore: add infrastructure for free running counter kan.liang
@ 2018-01-15 18:57 ` kan.liang
  2018-01-15 18:57 ` [PATCH V5 7/8] perf/x86/intel/uncore: expose uncore_pmu_event functions kan.liang
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 27+ messages in thread
From: kan.liang @ 2018-01-15 18:57 UTC (permalink / raw)
  To: tglx, mingo, peterz, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

As of Skylake Server, there are a number of free running counters in
each IIO box that collect counts for per-box IO clocks and per-port
Input/Output x BW/Utilization.

The free running counters are read-only and always active. Counting is
suspended only when the IIO box is powered down.

There are three types of IIO free running counters on Skylake Server:
IO CLOCKS counters, BANDWIDTH counters and UTILIZATION counters.
The IO CLOCKS counter counts IO clocks.
The BANDWIDTH counters count inbound (PCIe->CPU) and outbound
(CPU->PCIe) bandwidth.
The UTILIZATION counters count input/output utilization.

The bit width of the free running counters is 36 bits.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---

Change since V4:
 - Add reviewed-by

 arch/x86/events/intel/uncore_snbep.c | 58 ++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index 6d8044a..0e2daca 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -3468,6 +3468,61 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
 	.read_counter		= uncore_msr_read_counter,
 };
 
+enum perf_uncore_iio_freerunning_type_id {
+	SKX_IIO_MSR_IOCLK			= 0,
+	SKX_IIO_MSR_BW				= 1,
+	SKX_IIO_MSR_UTIL			= 2,
+
+	SKX_IIO_FREERUNNING_TYPE_MAX,
+};
+
+
+static struct freerunning_counters skx_iio_freerunning[] = {
+	[SKX_IIO_MSR_IOCLK]	= { 0xa45, 0x1, 0x20, 1, 36 },
+	[SKX_IIO_MSR_BW]	= { 0xb00, 0x1, 0x10, 8, 36 },
+	[SKX_IIO_MSR_UTIL]	= { 0xb08, 0x1, 0x10, 8, 36 },
+};
+
+static struct uncore_event_desc skx_uncore_iio_events[] = {
+	/* Free-Running IO CLOCKS Counter */
+	INTEL_UNCORE_EVENT_DESC(ioclk,			"event=0xff,umask=0x10"),
+	/* Free-Running IIO BANDWIDTH Counters */
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0,		"event=0xff,umask=0x20"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1,		"event=0xff,umask=0x21"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2,		"event=0xff,umask=0x22"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3,		"event=0xff,umask=0x23"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0,		"event=0xff,umask=0x24"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1,		"event=0xff,umask=0x25"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2,		"event=0xff,umask=0x26"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3,		"event=0xff,umask=0x27"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3.unit,	"MiB"),
+	/* Free-running IIO UTILIZATION Counters */
+	INTEL_UNCORE_EVENT_DESC(util_in_port0,		"event=0xff,umask=0x30"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port0,		"event=0xff,umask=0x31"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port1,		"event=0xff,umask=0x32"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port1,		"event=0xff,umask=0x33"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port2,		"event=0xff,umask=0x34"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port2,		"event=0xff,umask=0x35"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port3,		"event=0xff,umask=0x36"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port3,		"event=0xff,umask=0x37"),
+	{ /* end: all zeroes */ },
+};
+
 static struct intel_uncore_type skx_uncore_iio = {
 	.name			= "iio",
 	.num_counters		= 4,
@@ -3479,8 +3534,11 @@ static struct intel_uncore_type skx_uncore_iio = {
 	.event_mask_ext		= SKX_IIO_PMON_RAW_EVENT_MASK_EXT,
 	.box_ctl		= SKX_IIO0_MSR_PMON_BOX_CTL,
 	.msr_offset		= SKX_IIO_MSR_OFFSET,
+	.num_freerunning_types	= SKX_IIO_FREERUNNING_TYPE_MAX,
+	.freerunning		= skx_iio_freerunning,
 	.constraints		= skx_uncore_iio_constraints,
 	.ops			= &skx_uncore_iio_ops,
+	.event_descs		= skx_uncore_iio_events,
 	.format_group		= &skx_uncore_iio_format_group,
 };
 
-- 
2.7.4


* [PATCH V5 7/8] perf/x86/intel/uncore: expose uncore_pmu_event functions
  2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
                   ` (4 preceding siblings ...)
  2018-01-15 18:57 ` [PATCH V5 6/8] perf/x86/intel/uncore: SKX support for IIO free running counters kan.liang
@ 2018-01-15 18:57 ` kan.liang
  2018-01-15 18:57 ` [PATCH V5 8/8] perf/x86/intel/uncore: clean up client IMC uncore kan.liang
  2018-01-18  9:36 ` [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for " Thomas Gleixner
  7 siblings, 0 replies; 27+ messages in thread
From: kan.liang @ 2018-01-15 18:57 UTC (permalink / raw)
  To: tglx, mingo, peterz, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

Some uncore units have a custom PMU, but a custom PMU does not need to
customize everything. For example, the client IMC uncore only needs a
customized event_init() function; the add()/del()/start()/stop()/read()
functions can use the generic code.

Expose the uncore_pmu_event_add/del/start/stop functions.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---

Change since V4:
 - Add reviewed-by

 arch/x86/events/intel/uncore.c | 8 ++++----
 arch/x86/events/intel/uncore.h | 4 ++++
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index f38a7bb..fae2836 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -451,7 +451,7 @@ static int uncore_assign_events(struct intel_uncore_box *box, int assign[], int
 	return ret ? -EINVAL : 0;
 }
 
-static void uncore_pmu_event_start(struct perf_event *event, int flags)
+void uncore_pmu_event_start(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	int idx = event->hw.idx;
@@ -491,7 +491,7 @@ static void uncore_pmu_event_start(struct perf_event *event, int flags)
 	}
 }
 
-static void uncore_pmu_event_stop(struct perf_event *event, int flags)
+void uncore_pmu_event_stop(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	struct hw_perf_event *hwc = &event->hw;
@@ -528,7 +528,7 @@ static void uncore_pmu_event_stop(struct perf_event *event, int flags)
 	}
 }
 
-static int uncore_pmu_event_add(struct perf_event *event, int flags)
+int uncore_pmu_event_add(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	struct hw_perf_event *hwc = &event->hw;
@@ -600,7 +600,7 @@ static int uncore_pmu_event_add(struct perf_event *event, int flags)
 	return 0;
 }
 
-static void uncore_pmu_event_del(struct perf_event *event, int flags)
+void uncore_pmu_event_del(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	int i;
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index 280a242..ed9f9d3 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -463,6 +463,10 @@ struct intel_uncore_box *uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu
 u64 uncore_msr_read_counter(struct intel_uncore_box *box, struct perf_event *event);
 void uncore_pmu_start_hrtimer(struct intel_uncore_box *box);
 void uncore_pmu_cancel_hrtimer(struct intel_uncore_box *box);
+void uncore_pmu_event_start(struct perf_event *event, int flags);
+void uncore_pmu_event_stop(struct perf_event *event, int flags);
+int uncore_pmu_event_add(struct perf_event *event, int flags);
+void uncore_pmu_event_del(struct perf_event *event, int flags);
 void uncore_pmu_event_read(struct perf_event *event);
 void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *event);
 struct event_constraint *
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH V5 8/8] perf/x86/intel/uncore: clean up client IMC uncore
  2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
                   ` (5 preceding siblings ...)
  2018-01-15 18:57 ` [PATCH V5 7/8] perf/x86/intel/uncore: expose uncore_pmu_event functions kan.liang
@ 2018-01-15 18:57 ` kan.liang
  2018-01-18  9:36 ` [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for " Thomas Gleixner
  7 siblings, 0 replies; 27+ messages in thread
From: kan.liang @ 2018-01-15 18:57 UTC (permalink / raw)
  To: tglx, mingo, peterz, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

The counters in the client IMC uncore are free running counters, not
fixed counters. Correct this by applying the new free running counter
infrastructure.

Introduce free running counter type SNB_PCI_UNCORE_IMC_DATA for data
read and data write counters.

Keep the custom event_init() function compatible with old event
encoding.

Clean up other custom event_* functions.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---

Changes since V4:
 - Add reviewed-by

 arch/x86/events/intel/uncore_snb.c | 132 ++++++-------------------------------
 1 file changed, 20 insertions(+), 112 deletions(-)

diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
index df53521..8527c3e 100644
--- a/arch/x86/events/intel/uncore_snb.c
+++ b/arch/x86/events/intel/uncore_snb.c
@@ -285,6 +285,15 @@ static struct uncore_event_desc snb_uncore_imc_events[] = {
 #define SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE	0x5054
 #define SNB_UNCORE_PCI_IMC_CTR_BASE		SNB_UNCORE_PCI_IMC_DATA_READS_BASE
 
+enum perf_snb_uncore_imc_freerunning_types {
+	SNB_PCI_UNCORE_IMC_DATA		= 0,
+	SNB_PCI_UNCORE_IMC_FREERUNNING_TYPE_MAX,
+};
+
+static struct freerunning_counters snb_uncore_imc_freerunning[] = {
+	[SNB_PCI_UNCORE_IMC_DATA]     = { SNB_UNCORE_PCI_IMC_DATA_READS_BASE, 0x4, 0x0, 2, 32 },
+};
+
 static struct attribute *snb_uncore_imc_formats_attr[] = {
 	&format_attr_event.attr,
 	NULL,
@@ -341,9 +350,8 @@ static u64 snb_uncore_imc_read_counter(struct intel_uncore_box *box, struct perf
 }
 
 /*
- * custom event_init() function because we define our own fixed, free
- * running counters, so we do not want to conflict with generic uncore
- * logic. Also simplifies processing
+ * Keep the custom event_init() function compatible with old event
+ * encoding for free running counters.
  */
 static int snb_uncore_imc_event_init(struct perf_event *event)
 {
@@ -405,11 +413,11 @@ static int snb_uncore_imc_event_init(struct perf_event *event)
 	switch (cfg) {
 	case SNB_UNCORE_PCI_IMC_DATA_READS:
 		base = SNB_UNCORE_PCI_IMC_DATA_READS_BASE;
-		idx = UNCORE_PMC_IDX_FIXED;
+		idx = UNCORE_PMC_IDX_FREERUNNING;
 		break;
 	case SNB_UNCORE_PCI_IMC_DATA_WRITES:
 		base = SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE;
-		idx = UNCORE_PMC_IDX_FIXED + 1;
+		idx = UNCORE_PMC_IDX_FREERUNNING;
 		break;
 	default:
 		return -EINVAL;
@@ -430,104 +438,6 @@ static int snb_uncore_imc_hw_config(struct intel_uncore_box *box, struct perf_ev
 	return 0;
 }
 
-static void snb_uncore_imc_event_start(struct perf_event *event, int flags)
-{
-	struct intel_uncore_box *box = uncore_event_to_box(event);
-	u64 count;
-
-	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
-		return;
-
-	event->hw.state = 0;
-	box->n_active++;
-
-	list_add_tail(&event->active_entry, &box->active_list);
-
-	count = snb_uncore_imc_read_counter(box, event);
-	local64_set(&event->hw.prev_count, count);
-
-	if (box->n_active == 1)
-		uncore_pmu_start_hrtimer(box);
-}
-
-static void snb_uncore_imc_event_read(struct perf_event *event)
-{
-	struct intel_uncore_box *box = uncore_event_to_box(event);
-	u64 prev_count, new_count, delta;
-	int shift;
-
-	/*
-	 * There are two free running counters in IMC.
-	 * The index for the second one is hardcoded to
-	 * UNCORE_PMC_IDX_FIXED + 1.
-	 */
-	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
-		shift = 64 - uncore_fixed_ctr_bits(box);
-	else
-		shift = 64 - uncore_perf_ctr_bits(box);
-
-	/* the hrtimer might modify the previous event value */
-again:
-	prev_count = local64_read(&event->hw.prev_count);
-	new_count = uncore_read_counter(box, event);
-	if (local64_xchg(&event->hw.prev_count, new_count) != prev_count)
-		goto again;
-
-	delta = (new_count << shift) - (prev_count << shift);
-	delta >>= shift;
-
-	local64_add(delta, &event->count);
-}
-
-static void snb_uncore_imc_event_stop(struct perf_event *event, int flags)
-{
-	struct intel_uncore_box *box = uncore_event_to_box(event);
-	struct hw_perf_event *hwc = &event->hw;
-
-	if (!(hwc->state & PERF_HES_STOPPED)) {
-		box->n_active--;
-
-		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
-		hwc->state |= PERF_HES_STOPPED;
-
-		list_del(&event->active_entry);
-
-		if (box->n_active == 0)
-			uncore_pmu_cancel_hrtimer(box);
-	}
-
-	if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
-		/*
-		 * Drain the remaining delta count out of a event
-		 * that we are disabling:
-		 */
-		snb_uncore_imc_event_read(event);
-		hwc->state |= PERF_HES_UPTODATE;
-	}
-}
-
-static int snb_uncore_imc_event_add(struct perf_event *event, int flags)
-{
-	struct intel_uncore_box *box = uncore_event_to_box(event);
-	struct hw_perf_event *hwc = &event->hw;
-
-	if (!box)
-		return -ENODEV;
-
-	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
-	if (!(flags & PERF_EF_START))
-		hwc->state |= PERF_HES_ARCH;
-
-	snb_uncore_imc_event_start(event, 0);
-
-	return 0;
-}
-
-static void snb_uncore_imc_event_del(struct perf_event *event, int flags)
-{
-	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);
-}
-
 int snb_pci2phy_map_init(int devid)
 {
 	struct pci_dev *dev = NULL;
@@ -559,11 +469,11 @@ int snb_pci2phy_map_init(int devid)
 static struct pmu snb_uncore_imc_pmu = {
 	.task_ctx_nr	= perf_invalid_context,
 	.event_init	= snb_uncore_imc_event_init,
-	.add		= snb_uncore_imc_event_add,
-	.del		= snb_uncore_imc_event_del,
-	.start		= snb_uncore_imc_event_start,
-	.stop		= snb_uncore_imc_event_stop,
-	.read		= snb_uncore_imc_event_read,
+	.add		= uncore_pmu_event_add,
+	.del		= uncore_pmu_event_del,
+	.start		= uncore_pmu_event_start,
+	.stop		= uncore_pmu_event_stop,
+	.read		= uncore_pmu_event_read,
 };
 
 static struct intel_uncore_ops snb_uncore_imc_ops = {
@@ -581,12 +491,10 @@ static struct intel_uncore_type snb_uncore_imc = {
 	.name		= "imc",
 	.num_counters   = 2,
 	.num_boxes	= 1,
-	.fixed_ctr_bits	= 32,
-	.fixed_ctr	= SNB_UNCORE_PCI_IMC_CTR_BASE,
+	.num_freerunning_types	= SNB_PCI_UNCORE_IMC_FREERUNNING_TYPE_MAX,
+	.freerunning	= snb_uncore_imc_freerunning,
 	.event_descs	= snb_uncore_imc_events,
 	.format_group	= &snb_uncore_imc_format_group,
-	.perf_ctr	= SNB_UNCORE_PCI_IMC_DATA_READS_BASE,
-	.event_mask	= SNB_UNCORE_PCI_IMC_EVENT_MASK,
 	.ops		= &snb_uncore_imc_ops,
 	.pmu		= &snb_uncore_imc_pmu,
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore
  2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
                   ` (6 preceding siblings ...)
  2018-01-15 18:57 ` [PATCH V5 8/8] perf/x86/intel/uncore: clean up client IMC uncore kan.liang
@ 2018-01-18  9:36 ` Thomas Gleixner
  7 siblings, 0 replies; 27+ messages in thread
From: Thomas Gleixner @ 2018-01-18  9:36 UTC (permalink / raw)
  To: Kan Liang; +Cc: mingo, peterz, linux-kernel, acme, eranian, ak

On Mon, 15 Jan 2018, kan.liang@intel.com wrote:

> From: Kan Liang <Kan.liang@intel.com>
> 
> There are two free running counters for client IMC uncore. The custom
> event_init() function hardcode their index to 'UNCORE_PMC_IDX_FIXED' and
> 'UNCORE_PMC_IDX_FIXED + 1'. To support the 'UNCORE_PMC_IDX_FIXED + 1'
> case, the generic uncore_perf_event_update is obscurely hacked.
> The code quality issue will bring problem when new counter index is
> introduced into generic code. For example, free running counter index.
> 
> Introduce customized event_read function for client IMC uncore.
> The customized function is exactly copied from previous generic
> uncore_pmu_event_read.
> The 'UNCORE_PMC_IDX_FIXED + 1' case will be isolated for client IMC
> uncore only.
> 
> Signed-off-by: Kan Liang <Kan.liang@intel.com>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-15 18:57 ` [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters kan.liang
@ 2018-01-18 13:32   ` Peter Zijlstra
  2018-01-18 17:43     ` Liang, Kan
  0 siblings, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2018-01-18 13:32 UTC (permalink / raw)
  To: kan.liang; +Cc: tglx, mingo, linux-kernel, acme, eranian, ak

On Mon, Jan 15, 2018 at 10:57:05AM -0800, kan.liang@intel.com wrote:
> From: Kan Liang <Kan.liang@intel.com>
> 
> There are a number of free running counters introduced for uncore, which
> provide highly valuable information to a wide array of customers.
> For example, Skylake Server has IIO free running counters to collect
> Input/Output x BW/Utilization.
> The precious generic counters could be saved to collect other customer
> interested data.
> 
> The free running counter is read-only and always active. Current generic
> uncore code does not support this kind of counters.
> 
> Introduce a new index to indicate the free running counters. Only one
> index is enough for all free running counters. Because the free running
> counters are always active, and the event and free running counter are
> always 1:1 mapped. It does not need extra index to indicate the assigned
> counter.
> 
> Introduce some rules to encode the event for free running counters.
> - The event for free running counter has the same event code 0xff as the
>   event for fixed counter.
> - The umask of the event starts from 0x10. The umask which is less than
>   0x10 is reserved for the event of fixed counter.
> - The free running counters can be divided into different types
>   according to the MSR location, bit width or definition. The start
>   point of the umask for different type has 0x10 offset.
> For example, there are three types of IIO free running counters on
> Skylake server, IO CLOCKS counters, BANDWIDTH counters and UTILIZATION
> counters.
> The event code for all free running counters is 0xff.
> 'ioclk' is the first counter of IO CLOCKS. IO CLOCKS is the first type
> of free running counters, which umask starts from 0x10.
> So 'ioclk' is encoded as event=0xff,umask=0x10
> 'bw_in_port2' is the third counter of BANDWIDTH counters. BANDWIDTH is
> the second type which umask starts from 0x20.
> So 'bw_in_port2' is encoded as event=0xff,umask=0x22.
> 
> Introduce a new data structure to store free running counters related
> information for each type. It includes the number of counters, bit
> width, base address, offset between counters and offset between boxes.
> 
> Introduce several inline helpers to check index for fixed counter and
> free running counter, validate free running counter event, and retrieve
> the free running counter information according to box and event.

Sorry, none of this makes any sense, what?

WTH would all free running counters, which presumably count different
things, have the same event code ?

And what does the hackery with the umask do?

Please rewrite this in comprehensible form and also give rationale for
the various choices.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-18 13:32   ` Peter Zijlstra
@ 2018-01-18 17:43     ` Liang, Kan
  2018-01-19 13:07       ` Peter Zijlstra
  0 siblings, 1 reply; 27+ messages in thread
From: Liang, Kan @ 2018-01-18 17:43 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: tglx, mingo, linux-kernel, acme, eranian, ak

> On Mon, Jan 15, 2018 at 10:57:05AM -0800, kan.liang@intel.com wrote:
> > From: Kan Liang <Kan.liang@intel.com>
> >
> > There are a number of free running counters introduced for uncore, which
> > provide highly valuable information to a wide array of customers.
> > For example, Skylake Server has IIO free running counters to collect
> > Input/Output x BW/Utilization.
> > The precious generic counters could be saved to collect other customer
> > interested data.
> >
> > The free running counter is read-only and always active. Current generic
> > uncore code does not support this kind of counters.
> >
> > Introduce a new index to indicate the free running counters. Only one
> > index is enough for all free running counters. Because the free running
> > counters are always active, and the event and free running counter are
> > always 1:1 mapped. It does not need extra index to indicate the assigned
> > counter.
> >
> > Introduce some rules to encode the event for free running counters.
> > - The event for free running counter has the same event code 0xff as the
> >   event for fixed counter.
> > - The umask of the event starts from 0x10. The umask which is less than
> >   0x10 is reserved for the event of fixed counter.
> > - The free running counters can be divided into different types
> >   according to the MSR location, bit width or definition. The start
> >   point of the umask for different type has 0x10 offset.
> > For example, there are three types of IIO free running counters on
> > Skylake server, IO CLOCKS counters, BANDWIDTH counters and
> UTILIZATION
> > counters.
> > The event code for all free running counters is 0xff.
> > 'ioclk' is the first counter of IO CLOCKS. IO CLOCKS is the first type
> > of free running counters, which umask starts from 0x10.
> > So 'ioclk' is encoded as event=0xff,umask=0x10
> > 'bw_in_port2' is the third counter of BANDWIDTH counters. BANDWIDTH is
> > the second type which umask starts from 0x20.
> > So 'bw_in_port2' is encoded as event=0xff,umask=0x22.
> >
> > Introduce a new data structure to store free running counters related
> > information for each type. It includes the number of counters, bit
> > width, base address, offset between counters and offset between boxes.
> >
> > Introduce several inline helpers to check index for fixed counter and
> > free running counter, validate free running counter event, and retrieve
> > the free running counter information according to box and event.
>
> Sorry, none of this makes any sense, what?
>
> WTH would all free running counters, which presumably count different
> things, have the same event code ?
>
> And whats the hackery with the umask do?
>
> Please rewrite this in comprehensible form and also give rationale for
> the various choices.

I rewrote the encoding part as below. Does it look better?

------

In the uncore document, there is no event-code assigned to free running counters.
Some events need to be defined to identify the free running counters.
The events are encoded as event-code + umask-code.

The event-code for all free running counters is 0xff, which is the same as
for the fixed counters.
- It has not been decided what codes will be used for common events on
  future platforms. 0xff is the only code which will definitely not be used
  for any common event.
- Free running counters and fixed counters are both dedicated counters.
  It makes sense to share the event-code between these two types of counters.
- Even in the existing code, the fixed counters for the core, which have the
  same event-code, may count different things. Hence, it should not surprise
  users if free running counters that share the same event-code also count
  different things. The umask is used to distinguish the counters.

The umask-code is used to distinguish a fixed counter from a free running
counter, and to distinguish the different types of free running counters.
For fixed counters, the umask-code is 0x0X, where X is the index of the
fixed counter, starting from 0.
- Compatible with the old event encoding.
- Currently, there is only one fixed counter, so 15 values remain reserved
  for extension.
For free running counters, the umask-code uses the rest of the space and
bears the format 0xXY.
X stands for the type of free running counter, starting from 1.
Y stands for the index of free running counters of the same type, starting
from 0.
- The free running counters count different things. They can be categorized
  into several types according to MSR location, bit width and definition.
  E.g. there are three types of IIO free running counters on Skylake server,
  to monitor IO CLOCKS, BANDWIDTH and UTILIZATION on different ports.
  This makes it easy to locate the free running counter of a specific type.
- So far, there are at most 8 counters of each type, so 8 values remain
  reserved for extension.

------

Thanks,
Kan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-18 17:43     ` Liang, Kan
@ 2018-01-19 13:07       ` Peter Zijlstra
  2018-01-19 15:15         ` Liang, Kan
  0 siblings, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2018-01-19 13:07 UTC (permalink / raw)
  To: Liang, Kan; +Cc: tglx, mingo, linux-kernel, acme, eranian, ak

On Thu, Jan 18, 2018 at 05:43:10PM +0000, Liang, Kan wrote:
> In the uncore document, there is no event-code assigned to free running counters.
> Some events need to be defined to indicate the free running counters.
> The events are encoded as event-code + umask-code.
> 
> The event-code for all free running counters is 0xff, which is the same as
> the fixed counters.

Is it possible to count the same things using the generic counters?

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 13:07       ` Peter Zijlstra
@ 2018-01-19 15:15         ` Liang, Kan
  2018-01-19 17:19           ` Peter Zijlstra
  0 siblings, 1 reply; 27+ messages in thread
From: Liang, Kan @ 2018-01-19 15:15 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: tglx, mingo, linux-kernel, acme, eranian, ak

> 
> On Thu, Jan 18, 2018 at 05:43:10PM +0000, Liang, Kan wrote:
> > In the uncore document, there is no event-code assigned to free running
> counters.
> > Some events need to be defined to indicate the free running counters.
> > The events are encoded as event-code + umask-code.
> >
> > The event-code for all free running counters is 0xff, which is the
> > same as the fixed counters.
> 
> Is it possible to count the same things using the generic counters?
> 

Yes, there are events for generic counters to count bandwidth and
utilization.

The reasons for introducing free running counters are:
- To provide highly valuable information (bandwidth and utilization)
  which most customers are interested in
- To save the precious generic counters for other events


Thanks,
Kan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 15:15         ` Liang, Kan
@ 2018-01-19 17:19           ` Peter Zijlstra
  2018-01-19 17:34             ` Stephane Eranian
  2018-01-19 17:53             ` Liang, Kan
  0 siblings, 2 replies; 27+ messages in thread
From: Peter Zijlstra @ 2018-01-19 17:19 UTC (permalink / raw)
  To: Liang, Kan; +Cc: tglx, mingo, linux-kernel, acme, eranian, ak

On Fri, Jan 19, 2018 at 03:15:00PM +0000, Liang, Kan wrote:
> > 
> > On Thu, Jan 18, 2018 at 05:43:10PM +0000, Liang, Kan wrote:
> > > In the uncore document, there is no event-code assigned to free running
> > counters.
> > > Some events need to be defined to indicate the free running counters.
> > > The events are encoded as event-code + umask-code.
> > >
> > > The event-code for all free running counters is 0xff, which is the
> > > same as the fixed counters.
> > 
> > Is it possible to count the same things using the generic counters?
> > 
> 
> Yes, there are events for generic counters to count bandwidth and
> utilization.
> 
> The reasons of introducing free running counters are
> - To provide highly valuable information (bandwidth and utilization)
>    which most of the customers are interested in
> - To save on the precious generic counters

_IF_ the exact same counters are available on the GPs then we must use the
same event code for them and use event scheduling to place them on
fixed/free-running counters when possible.

That's what we do for the CPU PMU's fixed counters too.

Don't invent magic event codes just because.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 17:19           ` Peter Zijlstra
@ 2018-01-19 17:34             ` Stephane Eranian
  2018-01-19 17:53             ` Liang, Kan
  1 sibling, 0 replies; 27+ messages in thread
From: Stephane Eranian @ 2018-01-19 17:34 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Liang, Kan, tglx, mingo, linux-kernel, acme, ak

On Fri, Jan 19, 2018 at 9:19 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Jan 19, 2018 at 03:15:00PM +0000, Liang, Kan wrote:
> > >
> > > On Thu, Jan 18, 2018 at 05:43:10PM +0000, Liang, Kan wrote:
> > > > In the uncore document, there is no event-code assigned to free running
> > > counters.
> > > > Some events need to be defined to indicate the free running counters.
> > > > The events are encoded as event-code + umask-code.
> > > >
> > > > The event-code for all free running counters is 0xff, which is the
> > > > same as the fixed counters.
> > >
> > > Is it possible to count the same things using the generic counters?
> > >
> >
> > Yes, there are events for generic counters to count bandwidth and
> > utilization.
> >
> > The reasons of introducing free running counters are
> > - To provide highly valuable information (bandwidth and utilization)
> >    which most of the customers are interested in
> > - To save on the precious generic counters
>
> _IF_ the exact same counters are available on the GPs then we must use the
> same event code for them and use event scheduling to place them on
> fixed/free-running counters when possible.
>
> That's what we do for the CPU PMU's fixed counters too.
>
> Don't invent magic event codes just because.

I agree with Peter here. The scheduling algorithm should take care of this.
There is only one case in the core PMU where we had to invent an event code.
That was for UNHALTED_REFERENCE_CYCLES, which could only be measured
on a fixed counter. But for the other two, the scheduling code takes
care of them, taking the filter bits into consideration.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 17:19           ` Peter Zijlstra
  2018-01-19 17:34             ` Stephane Eranian
@ 2018-01-19 17:53             ` Liang, Kan
  2018-01-19 17:55               ` Stephane Eranian
  1 sibling, 1 reply; 27+ messages in thread
From: Liang, Kan @ 2018-01-19 17:53 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: tglx, mingo, linux-kernel, acme, eranian, ak

> On Fri, Jan 19, 2018 at 03:15:00PM +0000, Liang, Kan wrote:
> > >
> > > On Thu, Jan 18, 2018 at 05:43:10PM +0000, Liang, Kan wrote:
> > > > In the uncore document, there is no event-code assigned to free
> > > > running
> > > counters.
> > > > Some events need to be defined to indicate the free running counters.
> > > > The events are encoded as event-code + umask-code.
> > > >
> > > > The event-code for all free running counters is 0xff, which is the
> > > > same as the fixed counters.
> > >
> > > Is it possible to count the same things using the generic counters?
> > >
> >
> > Yes, there are events for generic counters to count bandwidth and
> > utilization.
> >
> > The reasons of introducing free running counters are
> > - To provide highly valuable information (bandwidth and utilization)
> >    which most of the customers are interested in
> > - To save on the precious generic counters
> 
> _IF_ the exact same counters are available on the GPs then we must use the
> same event code for them and use event scheduling to place them on
> fixed/free-running counters when possible.

OK. I will check if there are the exact same events on GPs for
those free running counters.

Thanks,
Kan

> 
> That's what we do for the CPU PMU's fixed counters too.
> 
> Don't invent magic event codes just because.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 17:53             ` Liang, Kan
@ 2018-01-19 17:55               ` Stephane Eranian
  2018-01-19 18:00                 ` Liang, Kan
  0 siblings, 1 reply; 27+ messages in thread
From: Stephane Eranian @ 2018-01-19 17:55 UTC (permalink / raw)
  To: Liang, Kan; +Cc: Peter Zijlstra, tglx, mingo, linux-kernel, acme, ak

On Fri, Jan 19, 2018 at 9:53 AM, Liang, Kan <kan.liang@intel.com> wrote:
>> On Fri, Jan 19, 2018 at 03:15:00PM +0000, Liang, Kan wrote:
>> > >
>> > > On Thu, Jan 18, 2018 at 05:43:10PM +0000, Liang, Kan wrote:
>> > > > In the uncore document, there is no event-code assigned to free
>> > > > running
>> > > counters.
>> > > > Some events need to be defined to indicate the free running counters.
>> > > > The events are encoded as event-code + umask-code.
>> > > >
>> > > > The event-code for all free running counters is 0xff, which is the
>> > > > same as the fixed counters.
>> > >
>> > > Is it possible to count the same things using the generic counters?
>> > >
>> >
>> > Yes, there are events for generic counters to count bandwidth and
>> > utilization.
>> >
>> > The reasons of introducing free running counters are
>> > - To provide highly valuable information (bandwidth and utilization)
>> >    which most of the customers are interested in
>> > - To save on the precious generic counters
>>
>> _IF_ the exact same counters are available on the GPs then we must use the
>> same event code for them and use event scheduling to place them on
>> fixed/free-running counters when possible.
>
> OK. I will check if there are the exact same events on GPs for
> those free running counters.
>
You can measure the bandwidth and utilization on the GP. I am doing it all
the time.

> Thanks,
> Kan
>
>>
>> That's what we do for the CPU PMU's fixed counters too.
>>
>> Don't invent magic event codes just because.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 17:55               ` Stephane Eranian
@ 2018-01-19 18:00                 ` Liang, Kan
  2018-01-19 20:18                   ` Stephane Eranian
  2018-01-19 20:24                   ` Andi Kleen
  0 siblings, 2 replies; 27+ messages in thread
From: Liang, Kan @ 2018-01-19 18:00 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: Peter Zijlstra, tglx, mingo, linux-kernel, acme, ak

> On Fri, Jan 19, 2018 at 9:53 AM, Liang, Kan <kan.liang@intel.com> wrote:
> >> On Fri, Jan 19, 2018 at 03:15:00PM +0000, Liang, Kan wrote:
> >> > >
> >> > > On Thu, Jan 18, 2018 at 05:43:10PM +0000, Liang, Kan wrote:
> >> > > > In the uncore document, there is no event-code assigned to free
> >> > > > running
> >> > > counters.
> >> > > > Some events need to be defined to indicate the free running
> counters.
> >> > > > The events are encoded as event-code + umask-code.
> >> > > >
> >> > > > The event-code for all free running counters is 0xff, which is
> >> > > > the same as the fixed counters.
> >> > >
> >> > > Is it possible to count the same things using the generic counters?
> >> > >
> >> >
> >> > Yes, there are events for generic counters to count bandwidth and
> >> > utilization.
> >> >
> >> > The reasons of introducing free running counters are
> >> > - To provide highly valuable information (bandwidth and utilization)
> >> >    which most of the customers are interested in
> >> > - To save on the precious generic counters
> >>
> >> _IF_ the exact same counters are available on the GPs then we must
> >> use the same event code for them and use event scheduling to place
> >> them on fixed/free-running counters when possible.
> >
> > OK. I will check if there are the exact same events on GPs for those
> > free running counters.
> >
> You can measure the bandwidth and utilization on the GP. I am doing it all
> the time.

Oh, thinking about it a bit more,
I think we cannot do the same thing as we did for the CPU PMU's fixed counters.

The counters here are free running counters. They cannot be started/stopped.

Thanks,
Kan

> 
> > Thanks,
> > Kan
> >
> >>
> >> That's what we do for the CPU PMU's fixed counters too.
> >>
> >> Don't invent magic event codes just because.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 18:00                 ` Liang, Kan
@ 2018-01-19 20:18                   ` Stephane Eranian
  2018-01-19 20:24                   ` Andi Kleen
  1 sibling, 0 replies; 27+ messages in thread
From: Stephane Eranian @ 2018-01-19 20:18 UTC (permalink / raw)
  To: Liang, Kan; +Cc: Peter Zijlstra, tglx, mingo, linux-kernel, acme, ak

On Fri, Jan 19, 2018 at 10:00 AM, Liang, Kan <kan.liang@intel.com> wrote:
>
> > On Fri, Jan 19, 2018 at 9:53 AM, Liang, Kan <kan.liang@intel.com> wrote:
> > >> On Fri, Jan 19, 2018 at 03:15:00PM +0000, Liang, Kan wrote:
> > >> > >
> > >> > > On Thu, Jan 18, 2018 at 05:43:10PM +0000, Liang, Kan wrote:
> > >> > > > In the uncore document, there is no event-code assigned to free
> > >> > > > running
> > >> > > counters.
> > >> > > > Some events need to be defined to indicate the free running
> > counters.
> > >> > > > The events are encoded as event-code + umask-code.
> > >> > > >
> > >> > > > The event-code for all free running counters is 0xff, which is
> > >> > > > the same as the fixed counters.
> > >> > >
> > >> > > Is it possible to count the same things using the generic counters?
> > >> > >
> > >> >
> > >> > Yes, there are events for generic counters to count bandwidth and
> > >> > utilization.
> > >> >
> > >> > The reasons of introducing free running counters are
> > >> > - To provide highly valuable information (bandwidth and utilization)
> > >> >    which most of the customers are interested in
> > >> > - To save on the precious generic counters
> > >>
> > >> _IF_ the exact same counters are available on the GPs then we must
> > >> use the same event code for them and use event scheduling to place
> > >> them on fixed/free-running counters when possible.
> > >
> > > OK. I will check if there are the exact same events on GPs for those
> > > free running counters.
> > >
> > You can measure the bandwidth and utilization on the GP. I am doing it all
> > the time.
>
> Oh, think a bit more.
> I think we cannot do the same thing as we did for CPU PMU's fixed counters.
>
> The counters here are free running counters. They cannot be start/stop.
>
I think they can, because this is all controlled by the event's software
state. You don't need to read the hw counter each time. If the event is
inactive, you return the saved 64-bit counter value. And I believe this
is independent of scheduling.
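Stephane's point — that a free-running counter can still honor start/stop
purely in software state — can be sketched like this. This is a user-space
model for illustration, not the actual kernel code; the names are invented:

```c
#include <stdint.h>

/* Stand-in for the free-running hardware counter: it always ticks
 * and software can never start or stop it. */
static uint64_t hw_counter;

struct sw_event {
	uint64_t prev_raw;	/* hw value at last start/update */
	uint64_t count;		/* accumulated 64-bit software count */
	int	 active;
};

static void event_update(struct sw_event *e)
{
	uint64_t now = hw_counter;

	e->count += now - e->prev_raw;	/* unsigned math absorbs one wrap */
	e->prev_raw = now;
}

/* "start" only snapshots the hw value; the hardware is untouched */
static void event_start(struct sw_event *e)
{
	e->prev_raw = hw_counter;
	e->active = 1;
}

/* "stop" folds the outstanding delta into the saved total */
static void event_stop(struct sw_event *e)
{
	event_update(e);
	e->active = 0;
}

static uint64_t event_read(struct sw_event *e)
{
	if (e->active)
		event_update(e);
	return e->count;	/* inactive: just return the saved value */
}
```

While the event is "stopped" the hardware keeps counting, but those ticks
are never folded into `count`, so the reported value behaves as if the
counter had stopped.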


>
> Thanks,
> Kan
>
> >
> > > Thanks,
> > > Kan
> > >
> > >>
> > >> That's what we do for the CPU PMU's fixed counters too.
> > >>
> > >> Don't invent magic event codes just because.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 18:00                 ` Liang, Kan
  2018-01-19 20:18                   ` Stephane Eranian
@ 2018-01-19 20:24                   ` Andi Kleen
  2018-01-19 20:50                     ` Stephane Eranian
  2018-01-19 21:33                     ` Peter Zijlstra
  1 sibling, 2 replies; 27+ messages in thread
From: Andi Kleen @ 2018-01-19 20:24 UTC (permalink / raw)
  To: Liang, Kan
  Cc: Stephane Eranian, Peter Zijlstra, tglx, mingo, linux-kernel, acme

> Oh, thinking about it a bit more.
> I think we cannot do the same thing as we did for CPU PMU's fixed counters.
> 
> The counters here are free running counters. They cannot be started or stopped.

Yes, free running counters have completely different semantics. They
need a separate event code.

-Andi


* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 20:24                   ` Andi Kleen
@ 2018-01-19 20:50                     ` Stephane Eranian
  2018-01-19 20:51                       ` Stephane Eranian
  2018-01-19 21:33                     ` Peter Zijlstra
  1 sibling, 1 reply; 27+ messages in thread
From: Stephane Eranian @ 2018-01-19 20:50 UTC (permalink / raw)
  To: Andi Kleen; +Cc: Liang, Kan, Peter Zijlstra, tglx, mingo, linux-kernel, acme

On Fri, Jan 19, 2018 at 12:24 PM, Andi Kleen <ak@linux.intel.com> wrote:
>> Oh, thinking about it a bit more.
>> I think we cannot do the same thing as we did for CPU PMU's fixed counters.
>>
>> The counters here are free running counters. They cannot be started or stopped.
>
> Yes, free running counters have completely different semantics. They
> need a separate event code.
>
Are you saying, you can be shared with no multiplexing?

> -Andi


* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 20:50                     ` Stephane Eranian
@ 2018-01-19 20:51                       ` Stephane Eranian
  2018-01-20  1:24                         ` Andi Kleen
  0 siblings, 1 reply; 27+ messages in thread
From: Stephane Eranian @ 2018-01-19 20:51 UTC (permalink / raw)
  To: Andi Kleen; +Cc: Liang, Kan, Peter Zijlstra, tglx, mingo, linux-kernel, acme

On Fri, Jan 19, 2018 at 12:50 PM, Stephane Eranian <eranian@google.com> wrote:
> On Fri, Jan 19, 2018 at 12:24 PM, Andi Kleen <ak@linux.intel.com> wrote:
>>> Oh, thinking about it a bit more.
>>> I think we cannot do the same thing as we did for CPU PMU's fixed counters.
>>>
>>> The counters here are free running counters. They cannot be started or stopped.
>>
>> Yes, free running counters have completely different semantics. They
>> need a separate event code.
>>
> Are you saying, you can be shared with no multiplexing?
>
Obviously not you, but the counters ;-)

>> -Andi


* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 20:24                   ` Andi Kleen
  2018-01-19 20:50                     ` Stephane Eranian
@ 2018-01-19 21:33                     ` Peter Zijlstra
  2018-01-23 22:00                       ` Liang, Kan
  1 sibling, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2018-01-19 21:33 UTC (permalink / raw)
  To: Andi Kleen; +Cc: Liang, Kan, Stephane Eranian, tglx, mingo, linux-kernel, acme

On Fri, Jan 19, 2018 at 12:24:17PM -0800, Andi Kleen wrote:
> > Oh, thinking about it a bit more.
> > I think we cannot do the same thing as we did for CPU PMU's fixed counters.
> > 
> > The counters here are free running counters. They cannot be started or stopped.
> 
> Yes, free running counters have completely different semantics. They
> need a separate event code.

The only thing that matters is if they count the same thing or not.

The not start/stop thing is not important. See arch/x86/events/msr.c on
how to deal with that. The short story is that you simply ignore stop
and update the prev_count on start. Then any next update will increment
with the correct delta.

(if the counter is short you also need to run a timer to deal with
wraps).
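For the wrap case Peter mentions: if the free-running counter is only N
bits wide, the delta between two raw samples must be taken modulo 2^N, and
the counter has to be sampled (e.g. from an hrtimer) more than once per
wrap period. A sketch of the width-aware delta, using the usual shift
trick for illustration rather than any specific driver's code:

```c
#include <stdint.h>

/*
 * Delta between two raw samples of a counter that is only `width`
 * bits wide.  Shifting both samples up so the counter's top bit
 * sits at bit 63 makes the unsigned subtraction wrap at the
 * counter width, so a single hardware wrap between samples still
 * yields the correct delta.
 */
static uint64_t counter_delta(uint64_t prev, uint64_t now, int width)
{
	int shift = 64 - width;

	return ((now << shift) - (prev << shift)) >> shift;
}
```

The periodic read is what keeps this honest: if more than one full wrap
can pass between two samples, the small apparent delta is
indistinguishable from the real one.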


* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 20:51                       ` Stephane Eranian
@ 2018-01-20  1:24                         ` Andi Kleen
  0 siblings, 0 replies; 27+ messages in thread
From: Andi Kleen @ 2018-01-20  1:24 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Liang, Kan, Peter Zijlstra, tglx, mingo, linux-kernel, acme

On Fri, Jan 19, 2018 at 12:51:08PM -0800, Stephane Eranian wrote:
> On Fri, Jan 19, 2018 at 12:50 PM, Stephane Eranian <eranian@google.com> wrote:
> > On Fri, Jan 19, 2018 at 12:24 PM, Andi Kleen <ak@linux.intel.com> wrote:
> >>> Oh, thinking about it a bit more.
> >>> I think we cannot do the same thing as we did for CPU PMU's fixed counters.
> >>>
> >>> The counters here are free running counters. They cannot be started or stopped.
> >>
> >> Yes, free running counters have completely different semantics. They
> >> need a separate event code.
> >>
> > Are you saying, you can be shared with no multiplexing?
> >
> Obviously not you, but the counters ;-)

Yes, free running counters can always be shared.

-Andi


* RE: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-19 21:33                     ` Peter Zijlstra
@ 2018-01-23 22:00                       ` Liang, Kan
  2018-01-24 10:17                         ` Peter Zijlstra
  0 siblings, 1 reply; 27+ messages in thread
From: Liang, Kan @ 2018-01-23 22:00 UTC (permalink / raw)
  To: Peter Zijlstra, Andi Kleen
  Cc: Stephane Eranian, tglx, mingo, linux-kernel, acme

> On Fri, Jan 19, 2018 at 12:24:17PM -0800, Andi Kleen wrote:
> > > Oh, thinking about it a bit more.
> > > I think we cannot do the same thing as we did for CPU PMU's fixed
> > > counters.
> > >
> > > The counters here are free running counters. They cannot be started or stopped.
> >
> > Yes, free running counters have completely different semantics. They
> > need a separate event code.
> 
> The only thing that matters is if they count the same thing or not.
>

Hi Peter,

There is NO event available on the GPs that is exactly the same as
the free-running counters.

For example, the BW free-running counters count the requests associated
with writes and completions.
The most similar events on the GPs are DATA_REQ_{OF,BY}_CPU.* events.
Except that some of their sub-events count requests which are not completions.
There are also other minor differences.
So we don't have alternative events for the free-running counters.
I think we have to use 0xff.
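A minimal sketch of what such an encoding could look like — event code
0xff marks a free-running counter and the umask selects which one. The
names, layout, and values here are illustrative only, not the actual
driver definitions:

```c
#include <stdint.h>

/* Hypothetical layout: low byte of the raw config is the event
 * code, the next byte is the umask.  0xff never matches a real GP
 * event in the uncore documentation, so it is free to mark the
 * free-running counters. */
#define FR_EVENT_CODE	0xffULL

static int is_freerunning_event(uint64_t config)
{
	return (config & 0xffULL) == FR_EVENT_CODE;
}

static unsigned int freerunning_idx(uint64_t config)
{
	/* the umask picks the individual free-running counter */
	return (unsigned int)((config >> 8) & 0xff);
}
```

event_init() could then route anything matching this encoding to a
free-running-counter path and everything else to normal event scheduling.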

For details, please refer to the uncore PMU guide. 
https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-uncore-performance-monitoring-manual.html

Thanks,
Kan
 
> The not start/stop thing is not important. See arch/x86/events/msr.c on how
> to deal with that. The short story is that you simply ignore stop and update
> the prev_count on start. Then any next update will increment with the
> correct delta.
> 
> (if the counter is short you also need to run a timer to deal with wraps).


* Re: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-23 22:00                       ` Liang, Kan
@ 2018-01-24 10:17                         ` Peter Zijlstra
  2018-01-24 15:46                           ` Liang, Kan
  0 siblings, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2018-01-24 10:17 UTC (permalink / raw)
  To: Liang, Kan; +Cc: Andi Kleen, Stephane Eranian, tglx, mingo, linux-kernel, acme

On Tue, Jan 23, 2018 at 10:00:58PM +0000, Liang, Kan wrote:
> > On Fri, Jan 19, 2018 at 12:24:17PM -0800, Andi Kleen wrote:
> > > > Oh, thinking about it a bit more.
> > > > I think we cannot do the same thing as we did for CPU PMU's fixed
> > > > counters.
> > > >
> > > > The counters here are free running counters. They cannot be started or stopped.
> > >
> > > Yes, free running counters have completely different semantics. They
> > > need a separate event code.
> > 
> > The only thing that matters is if they count the same thing or not.
> >
> 
> Hi Peter,
> 
> There is NO event available on the GPs that is exactly the same as
> the free-running counters.
> 
> For example, the BW free-running counters count the requests associated
> with writes and completions.
> The most similar events on the GPs are DATA_REQ_{OF,BY}_CPU.* events.
> Except that some of their sub-events count requests which are not completions.
> There are also other minor differences.
> So we don't have alternative events for the free-running counters.
> I think we have to use 0xff.

OK, but explicitly mention this as the reason for having to invent event
codes. Them being fixed purpose or free running isn't a valid reason for
that.


* RE: [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters
  2018-01-24 10:17                         ` Peter Zijlstra
@ 2018-01-24 15:46                           ` Liang, Kan
  0 siblings, 0 replies; 27+ messages in thread
From: Liang, Kan @ 2018-01-24 15:46 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Andi Kleen, Stephane Eranian, tglx, mingo, linux-kernel, acme

> On Tue, Jan 23, 2018 at 10:00:58PM +0000, Liang, Kan wrote:
> > > On Fri, Jan 19, 2018 at 12:24:17PM -0800, Andi Kleen wrote:
> > > > > Oh, thinking about it a bit more.
> > > > > I think we cannot do the same thing as we did for CPU PMU's fixed
> > > > > counters.
> > > > >
> > > > > The counters here are free running counters. They cannot be
> > > > > started or stopped.
> > > >
> > > > Yes, free running counters have completely different semantics. They
> > > > need a separate event code.
> > >
> > > The only thing that matters is if they count the same thing or not.
> > >
> >
> > Hi Peter,
> >
> > There is NO event available on the GPs that is exactly the same as
> > the free-running counters.
> >
> > For example, the BW free-running counters count the requests associated
> > with writes and completions.
> > The most similar events on the GPs are DATA_REQ_{OF,BY}_CPU.* events.
> > Except that some of their sub-events count requests which are not completions.
> > There are also other minor differences.
> > So we don't have alternative events for the free-running counters.
> > I think we have to use 0xff.
> 
> OK, but explicitly mention this as the reason for having to invent event
> codes. Them being fixed purpose or free running isn't a valid reason for
> that.

Sure, I will add it in V6.

Thanks,
Kan


end of thread, other threads:[~2018-01-24 15:47 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-01-15 18:57 [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for client IMC uncore kan.liang
2018-01-15 18:57 ` [PATCH V5 2/8] perf/x86/intel/uncore: correct fixed counter index check for NHM kan.liang
2018-01-15 18:57 ` [PATCH V5 3/8] perf/x86/intel/uncore: correct fixed counter index check in generic code kan.liang
2018-01-15 18:57 ` [PATCH V5 4/8] perf/x86/intel/uncore: add new data structures for free running counters kan.liang
2018-01-18 13:32   ` Peter Zijlstra
2018-01-18 17:43     ` Liang, Kan
2018-01-19 13:07       ` Peter Zijlstra
2018-01-19 15:15         ` Liang, Kan
2018-01-19 17:19           ` Peter Zijlstra
2018-01-19 17:34             ` Stephane Eranian
2018-01-19 17:53             ` Liang, Kan
2018-01-19 17:55               ` Stephane Eranian
2018-01-19 18:00                 ` Liang, Kan
2018-01-19 20:18                   ` Stephane Eranian
2018-01-19 20:24                   ` Andi Kleen
2018-01-19 20:50                     ` Stephane Eranian
2018-01-19 20:51                       ` Stephane Eranian
2018-01-20  1:24                         ` Andi Kleen
2018-01-19 21:33                     ` Peter Zijlstra
2018-01-23 22:00                       ` Liang, Kan
2018-01-24 10:17                         ` Peter Zijlstra
2018-01-24 15:46                           ` Liang, Kan
2018-01-15 18:57 ` [PATCH V5 5/8] perf/x86/intel/uncore: add infrastructure for free running counter kan.liang
2018-01-15 18:57 ` [PATCH V5 6/8] perf/x86/intel/uncore: SKX support for IIO free running counters kan.liang
2018-01-15 18:57 ` [PATCH V5 7/8] perf/x86/intel/uncore: expose uncore_pmu_event functions kan.liang
2018-01-15 18:57 ` [PATCH V5 8/8] perf/x86/intel/uncore: clean up client IMC uncore kan.liang
2018-01-18  9:36 ` [PATCH V5 1/8] perf/x86/intel/uncore: customized event_read for " Thomas Gleixner
