* [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
@ 2017-10-24 11:05 kan.liang
  2017-10-24 11:05 ` [PATCH V3 2/5] perf/x86/intel/uncore: add infrastructure for free running counter kan.liang
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: kan.liang @ 2017-10-24 11:05 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

The client IMC uncore obscurely hacks the generic
uncore_perf_event_update() to support the 'UNCORE_PMC_IDX_FIXED + 1' case.
That hack will cause problems when a new counter index, such as one for
free running counters, is introduced into the generic code.

Introduce a customized pmu event_read() function for the client IMC
uncore. The customized function is an exact copy of the previous generic
uncore_pmu_event_read().

Correct the fixed counter check in uncore_perf_event_update().
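
For reference, the shift arithmetic used by both the generic and the
customized read path is what lets counters narrower than 64 bits wrap
correctly. A minimal standalone sketch of the idea (illustrative only,
not part of this patch):

	/*
	 * For an N-bit counter, shift = 64 - N. Shifting both snapshots
	 * left by 'shift' discards the unused high bits, so the
	 * subtraction wraps correctly even when new_count < prev_count,
	 * and the final right shift scales the delta back down.
	 */
	static u64 counter_delta(u64 prev_count, u64 new_count, int bits)
	{
		int shift = 64 - bits;

		return ((new_count << shift) - (prev_count << shift)) >> shift;
	}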

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V2:
 - New patch to fix the event->hw.idx >= UNCORE_PMC_IDX_FIXED check in
   generic code. Temporarily add a customized pmu event_read() function.
   Patch 5/5 will clean up the customized event_* functions for the
   client IMC uncore.

 arch/x86/events/intel/uncore.c     |  2 +-
 arch/x86/events/intel/uncore_snb.c | 26 +++++++++++++++++++++++++-
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 1c5390f..3b8cd88 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -218,7 +218,7 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
 	u64 prev_count, new_count, delta;
 	int shift;
 
-	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
+	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
 		shift = 64 - uncore_fixed_ctr_bits(box);
 	else
 		shift = 64 - uncore_perf_ctr_bits(box);
diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
index db1127c..9d5cd3f 100644
--- a/arch/x86/events/intel/uncore_snb.c
+++ b/arch/x86/events/intel/uncore_snb.c
@@ -498,6 +498,30 @@ static void snb_uncore_imc_event_del(struct perf_event *event, int flags)
 	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);
 }
 
+static void snb_uncore_imc_event_read(struct perf_event *event)
+{
+	struct intel_uncore_box *box = uncore_event_to_box(event);
+	u64 prev_count, new_count, delta;
+	int shift;
+
+	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
+		shift = 64 - uncore_fixed_ctr_bits(box);
+	else
+		shift = 64 - uncore_perf_ctr_bits(box);
+
+	/* the hrtimer might modify the previous event value */
+again:
+	prev_count = local64_read(&event->hw.prev_count);
+	new_count = uncore_read_counter(box, event);
+	if (local64_xchg(&event->hw.prev_count, new_count) != prev_count)
+		goto again;
+
+	delta = (new_count << shift) - (prev_count << shift);
+	delta >>= shift;
+
+	local64_add(delta, &event->count);
+}
+
 int snb_pci2phy_map_init(int devid)
 {
 	struct pci_dev *dev = NULL;
@@ -533,7 +557,7 @@ static struct pmu snb_uncore_imc_pmu = {
 	.del		= snb_uncore_imc_event_del,
 	.start		= snb_uncore_imc_event_start,
 	.stop		= snb_uncore_imc_event_stop,
-	.read		= uncore_pmu_event_read,
+	.read		= snb_uncore_imc_event_read,
 };
 
 static struct intel_uncore_ops snb_uncore_imc_ops = {
-- 
2.7.4


* [PATCH V3 2/5] perf/x86/intel/uncore: add infrastructure for free running counter
  2017-10-24 11:05 [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore kan.liang
@ 2017-10-24 11:05 ` kan.liang
  2017-10-24 11:05 ` [PATCH V3 3/5] perf/x86/intel/uncore: SKX support for IIO " kan.liang
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: kan.liang @ 2017-10-24 11:05 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

A number of free running counters have been introduced for uncore, which
provide highly valuable information to a wide array of customers.
For example, Skylake Server has IIO free running counters to collect
IO clocks and per-port Input/Output bandwidth and utilization.
Using them frees up the precious generic counters to collect other data
of interest.

The free running counters are read-only and always active. The current
generic uncore code does not support this kind of counter.

Introduce a new index to indicate the free running counters. A single
index is enough for all of them: because a free running counter is always
active and events map 1:1 to counters, no extra index is needed to record
which counter is assigned.
Since the free running counter is read-only, it cannot be enabled or
disabled; this is handled specially in the generic code. Such events also
do not need to be tracked in the event list.
There is no overflow interrupt for free running counters, so use an
hrtimer to poll the counters periodically before they wrap.
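
As a sanity check on the polling approach: an N-bit counter wraps after
2^N increments. For example, a 36-bit counter incrementing at a
hypothetical 1 GHz wraps only about every 68.7 seconds (2^36 ns), and a
32-bit counter about every 4.3 seconds. The hrtimer period merely has to
stay below the wrap interval of the narrowest counter being polled.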

Introduce the following rules to encode events for free running counters:
- The event for a free running counter has the same event code, 0xff, as
  the event for a fixed counter.
- The umask of the event starts from 0x10. Umasks below 0x10 are reserved
  for fixed counter events.
- The free running counters can be divided into different types according
  to MSR location, bit width or definition. The umask start points for the
  different types are spaced 0x10 apart.
For example, there are three types of IIO free running counters on
Skylake Server: IO CLOCKS counters, BANDWIDTH counters and UTILIZATION
counters.
The event code for all free running counters is 0xff.
'ioclk' is the first counter of IO CLOCKS, the first type of free running
counter, whose umask starts at 0x10.
So 'ioclk' is encoded as event=0xff,umask=0x10.
'bw_in_port2' is the third counter of the BANDWIDTH type, the second
type, whose umask starts at 0x20.
So 'bw_in_port2' is encoded as event=0xff,umask=0x22.
The event codes for the IIO free running counters on Skylake Server can
be found in the next patch.
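
The decoding implied by these rules is small; below is a condensed sketch
of the helpers this patch adds to uncore.h (names shortened here; the
umask lives in config bits 15:8):

	#define UNCORE_FREERUNNING_UMASK_START	0x10

	/* Which group of counters, e.g. 0 = IO CLOCKS, 1 = BANDWIDTH. */
	static unsigned int freerunning_type(u64 config)
	{
		return (((config >> 8) - UNCORE_FREERUNNING_UMASK_START) >> 4) & 0xf;
	}

	/* Which counter within the group. */
	static unsigned int freerunning_idx(u64 config)
	{
		return (config >> 8) & 0xf;
	}

For 'bw_in_port2' (event=0xff,umask=0x22) this yields
type = (0x22 - 0x10) >> 4 = 1 (BANDWIDTH) and idx = 0x2.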

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V2:
 - Modify the changelog and comments.
 - The free running counter can be in MSR space or PCI space.
   Use the generic names counter_base/offset to replace msr_base/off.
   The related function names are changed accordingly.
 - Use an hrtimer to periodically poll the counter to avoid overflow.

 arch/x86/events/intel/uncore.c |  65 +++++++++++++++++++++--
 arch/x86/events/intel/uncore.h | 118 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 178 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 3b8cd88..c05971b 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -203,7 +203,7 @@ static void uncore_assign_hw_event(struct intel_uncore_box *box,
 	hwc->idx = idx;
 	hwc->last_tag = ++box->tags[idx];
 
-	if (hwc->idx == UNCORE_PMC_IDX_FIXED) {
+	if (uncore_pmc_fixed(hwc->idx)) {
 		hwc->event_base = uncore_fixed_ctr(box);
 		hwc->config_base = uncore_fixed_ctl(box);
 		return;
@@ -218,7 +218,9 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
 	u64 prev_count, new_count, delta;
 	int shift;
 
-	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
+	if (uncore_pmc_freerunning(event->hw.idx))
+		shift = 64 - uncore_freerunning_bits(box, event);
+	else if (uncore_pmc_fixed(event->hw.idx))
 		shift = 64 - uncore_fixed_ctr_bits(box);
 	else
 		shift = 64 - uncore_perf_ctr_bits(box);
@@ -454,10 +456,25 @@ static void uncore_pmu_event_start(struct perf_event *event, int flags)
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	int idx = event->hw.idx;
 
-	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
 		return;
 
-	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
+	/*
+	 * The free running counter is read-only and always active.
+	 * Use the current counter value as the start point.
+	 * There is no overflow interrupt, so use an hrtimer to poll
+	 * the counter periodically and catch wrap-arounds.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx)) {
+		list_add_tail(&event->active_entry, &box->active_list);
+		local64_set(&event->hw.prev_count,
+			    uncore_read_counter(box, event));
+		if (box->n_active++ == 0)
+			uncore_pmu_start_hrtimer(box);
+		return;
+	}
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
 		return;
 
 	event->hw.state = 0;
@@ -479,6 +496,15 @@ static void uncore_pmu_event_stop(struct perf_event *event, int flags)
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	struct hw_perf_event *hwc = &event->hw;
 
+	/* Cannot disable free running counter which is read-only */
+	if (uncore_pmc_freerunning(hwc->idx)) {
+		list_del(&event->active_entry);
+		if (--box->n_active == 0)
+			uncore_pmu_cancel_hrtimer(box);
+		uncore_perf_event_update(box, event);
+		return;
+	}
+
 	if (__test_and_clear_bit(hwc->idx, box->active_mask)) {
 		uncore_disable_event(box, event);
 		box->n_active--;
@@ -512,6 +538,17 @@ static int uncore_pmu_event_add(struct perf_event *event, int flags)
 	if (!box)
 		return -ENODEV;
 
+	/*
+	 * No counter assignment or event_list tracking is needed:
+	 * events map 1:1 to free running counters, and the counter
+	 * is always active.
+	 */
+	if (uncore_pmc_freerunning(hwc->idx)) {
+		if (flags & PERF_EF_START)
+			uncore_pmu_event_start(event, 0);
+		return 0;
+	}
+
 	ret = n = uncore_collect_events(box, event, false);
 	if (ret < 0)
 		return ret;
@@ -570,6 +607,14 @@ static void uncore_pmu_event_del(struct perf_event *event, int flags)
 
 	uncore_pmu_event_stop(event, PERF_EF_UPDATE);
 
+	/*
+	 * Events for free running counters are not tracked in the
+	 * event_list and map 1:1 to their counters, so there is no
+	 * need to force event->hw.idx = -1 to reassign the counter.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx))
+		return;
+
 	for (i = 0; i < box->n_events; i++) {
 		if (event == box->event_list[i]) {
 			uncore_put_event_constraint(box, event);
@@ -603,6 +648,13 @@ static int uncore_validate_group(struct intel_uncore_pmu *pmu,
 	struct intel_uncore_box *fake_box;
 	int ret = -EINVAL, n;
 
+	/*
+	 * Events map 1:1 to free running counters, which are always
+	 * available, so group validation always succeeds.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx))
+		return 0;
+
 	fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE);
 	if (!fake_box)
 		return -ENOMEM;
@@ -690,6 +742,11 @@ static int uncore_pmu_event_init(struct perf_event *event)
 
 		/* fixed counters have event field hardcoded to zero */
 		hwc->config = 0ULL;
+	} else if (is_freerunning_event(event)) {
+		if (!check_valid_freerunning_event(box, event))
+			return -EINVAL;
+		event->hw.idx = UNCORE_PMC_IDX_FREERUNNING;
+		event->hw.event_base = uncore_freerunning_event(box, event);
 	} else {
 		hwc->config = event->attr.config &
 			      (pmu->type->event_mask | ((u64)pmu->type->event_mask_ext << 32));
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index df5989f..0ed0e54 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -11,8 +11,13 @@
 
 #define UNCORE_FIXED_EVENT		0xff
 #define UNCORE_PMC_IDX_MAX_GENERIC	8
+#define UNCORE_PMC_IDX_MAX_FIXED	1
+#define UNCORE_PMC_IDX_MAX_FREERUNNING	1
 #define UNCORE_PMC_IDX_FIXED		UNCORE_PMC_IDX_MAX_GENERIC
-#define UNCORE_PMC_IDX_MAX		(UNCORE_PMC_IDX_FIXED + 1)
+#define UNCORE_PMC_IDX_FREERUNNING	(UNCORE_PMC_IDX_FIXED + \
+					UNCORE_PMC_IDX_MAX_FIXED)
+#define UNCORE_PMC_IDX_MAX		(UNCORE_PMC_IDX_FREERUNNING + \
+					UNCORE_PMC_IDX_MAX_FREERUNNING)
 
 #define UNCORE_PCI_DEV_FULL_DATA(dev, func, type, idx)	\
 		((dev << 24) | (func << 16) | (type << 8) | idx)
@@ -34,6 +39,7 @@ struct intel_uncore_ops;
 struct intel_uncore_pmu;
 struct intel_uncore_box;
 struct uncore_event_desc;
+struct freerunning_counters;
 
 struct intel_uncore_type {
 	const char *name;
@@ -41,6 +47,7 @@ struct intel_uncore_type {
 	int num_boxes;
 	int perf_ctr_bits;
 	int fixed_ctr_bits;
+	int num_freerunning_types;
 	unsigned perf_ctr;
 	unsigned event_ctl;
 	unsigned event_mask;
@@ -58,6 +65,7 @@ struct intel_uncore_type {
 	struct intel_uncore_pmu *pmus;
 	struct intel_uncore_ops *ops;
 	struct uncore_event_desc *event_descs;
+	struct freerunning_counters *freerunning;
 	const struct attribute_group *attr_groups[4];
 	struct pmu *pmu; /* for custom pmu ops */
 };
@@ -128,6 +136,14 @@ struct uncore_event_desc {
 	const char *config;
 };
 
+struct freerunning_counters {
+	unsigned int counter_base;
+	unsigned int counter_offset;
+	unsigned int box_offset;
+	unsigned int num_counters;
+	unsigned int bits;
+};
+
 struct pci2phy_map {
 	struct list_head list;
 	int segment;
@@ -156,6 +172,16 @@ static ssize_t __uncore_##_var##_show(struct kobject *kobj,		\
 static struct kobj_attribute format_attr_##_var =			\
 	__ATTR(_name, 0444, __uncore_##_var##_show, NULL)
 
+static inline bool uncore_pmc_fixed(int idx)
+{
+	return idx == UNCORE_PMC_IDX_FIXED;
+}
+
+static inline bool uncore_pmc_freerunning(int idx)
+{
+	return idx == UNCORE_PMC_IDX_FREERUNNING;
+}
+
 static inline unsigned uncore_pci_box_ctl(struct intel_uncore_box *box)
 {
 	return box->pmu->type->box_ctl;
@@ -213,6 +239,55 @@ static inline unsigned uncore_msr_fixed_ctr(struct intel_uncore_box *box)
 	return box->pmu->type->fixed_ctr + uncore_msr_box_offset(box);
 }
 
+
+/*
+ * A free running counter is similar to a fixed counter, except that it is
+ * read-only and always active while the uncore box is powered up.
+ *
+ * The following rules are used to encode events for free running
+ * counters:
+ * - The event for a free running counter has the same event code 0xff as
+ *   the event for a fixed counter.
+ * - The umask of the event starts from 0x10. Umasks below 0x10 are
+ *   reserved for fixed counter events.
+ * - The free running counters can be divided into different types according
+ *   to MSR location, bit width or definition. The umask start points for
+ *   the different types are spaced 0x10 apart.
+ *
+ * For example, there are three types of IIO free running counters on Skylake
+ * Server: IO CLOCKS counters, BANDWIDTH counters and UTILIZATION counters.
+ * The event code for all free running counters is 0xff.
+ * 'ioclk' is the first counter of IO CLOCKS, the first type, whose umask
+ * starts at 0x10.
+ * So 'ioclk' is encoded as event=0xff,umask=0x10.
+ * 'bw_in_port2' is the third counter of the BANDWIDTH type, the second
+ * type, whose umask starts at 0x20.
+ * So 'bw_in_port2' is encoded as event=0xff,umask=0x22.
+ */
+static inline unsigned int uncore_freerunning_idx(u64 config)
+{
+	return ((config >> 8) & 0xf);
+}
+
+#define UNCORE_FREERUNNING_UMASK_START		0x10
+static inline unsigned int uncore_freerunning_type(u64 config)
+{
+	return ((((config >> 8) - UNCORE_FREERUNNING_UMASK_START) >> 4) & 0xf);
+}
+
+static inline
+unsigned int uncore_freerunning_event(struct intel_uncore_box *box,
+				      struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_type(event->attr.config);
+	unsigned int idx = uncore_freerunning_idx(event->attr.config);
+	struct intel_uncore_pmu *pmu = box->pmu;
+
+	return pmu->type->freerunning[type].counter_base +
+	       pmu->type->freerunning[type].counter_offset * idx +
+	       pmu->type->freerunning[type].box_offset * pmu->pmu_idx;
+}
+
 static inline
 unsigned uncore_msr_event_ctl(struct intel_uncore_box *box, int idx)
 {
@@ -275,11 +350,52 @@ static inline int uncore_fixed_ctr_bits(struct intel_uncore_box *box)
 	return box->pmu->type->fixed_ctr_bits;
 }
 
+static inline
+unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+				     struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_type(event->attr.config);
+
+	return box->pmu->type->freerunning[type].bits;
+}
+
+static inline int uncore_num_freerunning(struct intel_uncore_box *box,
+					 struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_type(event->attr.config);
+
+	return box->pmu->type->freerunning[type].num_counters;
+}
+
+static inline int uncore_num_freerunning_types(struct intel_uncore_box *box,
+					       struct perf_event *event)
+{
+	return box->pmu->type->num_freerunning_types;
+}
+
+static inline bool check_valid_freerunning_event(struct intel_uncore_box *box,
+						 struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_type(event->attr.config);
+	unsigned int idx = uncore_freerunning_idx(event->attr.config);
+
+	return (type < uncore_num_freerunning_types(box, event)) &&
+	       (idx < uncore_num_freerunning(box, event));
+}
+
 static inline int uncore_num_counters(struct intel_uncore_box *box)
 {
 	return box->pmu->type->num_counters;
 }
 
+static inline bool is_freerunning_event(struct perf_event *event)
+{
+	u64 cfg = event->attr.config;
+
+	return ((cfg & UNCORE_FIXED_EVENT) == UNCORE_FIXED_EVENT) &&
+	       (((cfg >> 8) & 0xff) >= UNCORE_FREERUNNING_UMASK_START);
+}
+
 static inline void uncore_disable_box(struct intel_uncore_box *box)
 {
 	if (box->pmu->type->ops->disable_box)
-- 
2.7.4


* [PATCH V3 3/5] perf/x86/intel/uncore: SKX support for IIO free running counter
  2017-10-24 11:05 [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore kan.liang
  2017-10-24 11:05 ` [PATCH V3 2/5] perf/x86/intel/uncore: add infrastructure for free running counter kan.liang
@ 2017-10-24 11:05 ` kan.liang
  2017-10-24 11:05 ` [PATCH V3 4/5] perf/x86/intel/uncore: expose pmu counter operation functions kan.liang
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: kan.liang @ 2017-10-24 11:05 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

As of Skylake Server, each IIO box has a number of free running counters
that count per-box IO clocks and per-port Input/Output bandwidth and
utilization.

The free running counters are read-only and always active. Counting is
suspended only when the IIO box is powered down.

There are three types of IIO free running counters on Skylake Server:
the IO CLOCKS counter, the BANDWIDTH counters and the UTILIZATION
counters.
The IO CLOCKS counter counts IO clocks.
The BANDWIDTH counters count inbound (PCIe->CPU) and outbound (CPU->PCIe)
bandwidth.
The UTILIZATION counters count input/output utilization.

All of these free running counters are 36 bits wide.
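
With the freerunning_counters entries below, a counter's MSR address
follows directly from the generic helper added in patch 2/5:
counter_base + counter_offset * idx + box_offset * pmu_idx. For example,
'bw_in_port2' (BANDWIDTH type, idx 2) on IIO box 1 resolves to
0xb00 + 0x1 * 2 + 0x10 * 1 = 0xb12. The scale of 3.814697266e-6 on the
bandwidth events is 4 / 2^20, i.e. each count represents 4 bytes and the
scale converts the total to the advertised MiB unit.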

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V2:
 - Use struct freerunning_counters to replace the old struct freerunning_msr

 arch/x86/events/intel/uncore_snbep.c | 58 ++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index db1fe37..4ee1e86 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -3459,6 +3459,61 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
 	.read_counter		= uncore_msr_read_counter,
 };
 
+enum perf_uncore_iio_freerunning_type_id {
+	SKX_IIO_MSR_IOCLK			= 0,
+	SKX_IIO_MSR_BW				= 1,
+	SKX_IIO_MSR_UTIL			= 2,
+
+	SKX_IIO_FREERUNNING_TYPE_MAX,
+};
+
+
+static struct freerunning_counters skx_iio_freerunning[] = {
+	[SKX_IIO_MSR_IOCLK]	= { 0xa45, 0x1, 0x20, 1, 36 },
+	[SKX_IIO_MSR_BW]	= { 0xb00, 0x1, 0x10, 8, 36 },
+	[SKX_IIO_MSR_UTIL]	= { 0xb08, 0x1, 0x10, 8, 36 },
+};
+
+static struct uncore_event_desc skx_uncore_iio_events[] = {
+	/* Free-Running IO CLOCKS Counter */
+	INTEL_UNCORE_EVENT_DESC(ioclk,			"event=0xff,umask=0x10"),
+	/* Free-Running IIO BANDWIDTH Counters */
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0,		"event=0xff,umask=0x20"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1,		"event=0xff,umask=0x21"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2,		"event=0xff,umask=0x22"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3,		"event=0xff,umask=0x23"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0,		"event=0xff,umask=0x24"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1,		"event=0xff,umask=0x25"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2,		"event=0xff,umask=0x26"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3,		"event=0xff,umask=0x27"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3.unit,	"MiB"),
+	/* Free-running IIO UTILIZATION Counters */
+	INTEL_UNCORE_EVENT_DESC(util_in_port0,		"event=0xff,umask=0x30"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port0,		"event=0xff,umask=0x31"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port1,		"event=0xff,umask=0x32"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port1,		"event=0xff,umask=0x33"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port2,		"event=0xff,umask=0x34"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port2,		"event=0xff,umask=0x35"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port3,		"event=0xff,umask=0x36"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port3,		"event=0xff,umask=0x37"),
+	{ /* end: all zeroes */ },
+};
+
 static struct intel_uncore_type skx_uncore_iio = {
 	.name			= "iio",
 	.num_counters		= 4,
@@ -3470,8 +3525,11 @@ static struct intel_uncore_type skx_uncore_iio = {
 	.event_mask_ext		= SKX_IIO_PMON_RAW_EVENT_MASK_EXT,
 	.box_ctl		= SKX_IIO0_MSR_PMON_BOX_CTL,
 	.msr_offset		= SKX_IIO_MSR_OFFSET,
+	.num_freerunning_types	= SKX_IIO_FREERUNNING_TYPE_MAX,
+	.freerunning		= skx_iio_freerunning,
 	.constraints		= skx_uncore_iio_constraints,
 	.ops			= &skx_uncore_iio_ops,
+	.event_descs		= skx_uncore_iio_events,
 	.format_group		= &skx_uncore_iio_format_group,
 };
 
-- 
2.7.4


* [PATCH V3 4/5] perf/x86/intel/uncore: expose pmu counter operation functions
  2017-10-24 11:05 [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore kan.liang
  2017-10-24 11:05 ` [PATCH V3 2/5] perf/x86/intel/uncore: add infrastructure for free running counter kan.liang
  2017-10-24 11:05 ` [PATCH V3 3/5] perf/x86/intel/uncore: SKX support for IIO " kan.liang
@ 2017-10-24 11:05 ` kan.liang
  2017-10-24 11:05 ` [PATCH V3 5/5] perf/x86/intel/uncore: clean up client IMC uncore kan.liang
  2017-11-02 13:46 ` [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for " Thomas Gleixner
  4 siblings, 0 replies; 17+ messages in thread
From: kan.liang @ 2017-10-24 11:05 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

Some uncore units have a custom pmu. Usually it is not fully customized:
most of the counter operation functions can still use the generic code.
The client IMC uncore, for example, only needs a customized event_init()
function; the other counter operations, add()/del()/start()/stop()/read(),
can still use the generic code.

Expose the pmu counter operation functions
uncore_pmu_event_add/del/start/stop.
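
The intended usage pattern, realized for the client IMC in patch 5/5, is
a custom struct pmu that keeps only event_init() private and wires
everything else to the generic helpers. A sketch (the event_init name is
a placeholder):

	static struct pmu example_uncore_pmu = {
		.task_ctx_nr	= perf_invalid_context,
		.event_init	= example_uncore_event_init,	/* custom */
		.add		= uncore_pmu_event_add,		/* generic */
		.del		= uncore_pmu_event_del,		/* generic */
		.start		= uncore_pmu_event_start,	/* generic */
		.stop		= uncore_pmu_event_stop,	/* generic */
		.read		= uncore_pmu_event_read,	/* generic */
	};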

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V2:
 - New patch

 arch/x86/events/intel/uncore.c | 8 ++++----
 arch/x86/events/intel/uncore.h | 4 ++++
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index c05971b..97ee379 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -451,7 +451,7 @@ static int uncore_assign_events(struct intel_uncore_box *box, int assign[], int
 	return ret ? -EINVAL : 0;
 }
 
-static void uncore_pmu_event_start(struct perf_event *event, int flags)
+void uncore_pmu_event_start(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	int idx = event->hw.idx;
@@ -491,7 +491,7 @@ static void uncore_pmu_event_start(struct perf_event *event, int flags)
 	}
 }
 
-static void uncore_pmu_event_stop(struct perf_event *event, int flags)
+void uncore_pmu_event_stop(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	struct hw_perf_event *hwc = &event->hw;
@@ -528,7 +528,7 @@ static void uncore_pmu_event_stop(struct perf_event *event, int flags)
 	}
 }
 
-static int uncore_pmu_event_add(struct perf_event *event, int flags)
+int uncore_pmu_event_add(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	struct hw_perf_event *hwc = &event->hw;
@@ -600,7 +600,7 @@ static int uncore_pmu_event_add(struct perf_event *event, int flags)
 	return 0;
 }
 
-static void uncore_pmu_event_del(struct perf_event *event, int flags)
+void uncore_pmu_event_del(struct perf_event *event, int flags)
 {
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	int i;
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index 0ed0e54..2063fbd 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -461,6 +461,10 @@ struct intel_uncore_box *uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu
 u64 uncore_msr_read_counter(struct intel_uncore_box *box, struct perf_event *event);
 void uncore_pmu_start_hrtimer(struct intel_uncore_box *box);
 void uncore_pmu_cancel_hrtimer(struct intel_uncore_box *box);
+void uncore_pmu_event_start(struct perf_event *event, int flags);
+void uncore_pmu_event_stop(struct perf_event *event, int flags);
+int uncore_pmu_event_add(struct perf_event *event, int flags);
+void uncore_pmu_event_del(struct perf_event *event, int flags);
 void uncore_pmu_event_read(struct perf_event *event);
 void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *event);
 struct event_constraint *
-- 
2.7.4


* [PATCH V3 5/5] perf/x86/intel/uncore: clean up client IMC uncore
  2017-10-24 11:05 [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore kan.liang
                   ` (2 preceding siblings ...)
  2017-10-24 11:05 ` [PATCH V3 4/5] perf/x86/intel/uncore: expose pmu counter operation functions kan.liang
@ 2017-10-24 11:05 ` kan.liang
  2017-11-02 13:46 ` [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for " Thomas Gleixner
  4 siblings, 0 replies; 17+ messages in thread
From: kan.liang @ 2017-10-24 11:05 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

The counters in the client IMC uncore are free running counters, not
fixed counters, and should be treated as such: apply the new free
running counter infrastructure to them.

Introduce the free running counter type SNB_PCI_UNCORE_IMC_DATA for the
data read and data write counters.

Keep the custom event_init() function compatible with the old event
encoding.

Clean up the other custom event_* functions.
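
A worked reading of the new freerunning table entry below:
{ SNB_UNCORE_PCI_IMC_DATA_READS_BASE, 0x4, 0x0, 2, 32 } describes two
32-bit counters spaced 0x4 apart starting at the data-reads base, so the
data-writes counter is idx 1 at base + 0x4 (matching
SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE), with a box offset of 0 since there
is only one box.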

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V2:
 - New patch to clean up the client IMC uncore.

 arch/x86/events/intel/uncore_snb.c | 127 ++++++-------------------------------
 1 file changed, 20 insertions(+), 107 deletions(-)

diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
index 9d5cd3f..29473e9 100644
--- a/arch/x86/events/intel/uncore_snb.c
+++ b/arch/x86/events/intel/uncore_snb.c
@@ -284,6 +284,15 @@ static struct uncore_event_desc snb_uncore_imc_events[] = {
 #define SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE	0x5054
 #define SNB_UNCORE_PCI_IMC_CTR_BASE		SNB_UNCORE_PCI_IMC_DATA_READS_BASE
 
+enum perf_snb_uncore_imc_freerunning_types {
+	SNB_PCI_UNCORE_IMC_DATA		= 0,
+	SNB_PCI_UNCORE_IMC_FREERUNNING_TYPE_MAX,
+};
+
+static struct freerunning_counters snb_uncore_imc_freerunning[] = {
+	[SNB_PCI_UNCORE_IMC_DATA]     = { SNB_UNCORE_PCI_IMC_DATA_READS_BASE, 0x4, 0x0, 2, 32 },
+};
+
 static struct attribute *snb_uncore_imc_formats_attr[] = {
 	&format_attr_event.attr,
 	NULL,
@@ -340,9 +349,8 @@ static u64 snb_uncore_imc_read_counter(struct intel_uncore_box *box, struct perf
 }
 
 /*
- * custom event_init() function because we define our own fixed, free
- * running counters, so we do not want to conflict with generic uncore
- * logic. Also simplifies processing
+ * Keep the custom event_init() function compatible with the old
+ * event encoding for free running counters.
  */
 static int snb_uncore_imc_event_init(struct perf_event *event)
 {
@@ -404,11 +412,11 @@ static int snb_uncore_imc_event_init(struct perf_event *event)
 	switch (cfg) {
 	case SNB_UNCORE_PCI_IMC_DATA_READS:
 		base = SNB_UNCORE_PCI_IMC_DATA_READS_BASE;
-		idx = UNCORE_PMC_IDX_FIXED;
+		idx = UNCORE_PMC_IDX_FREERUNNING;
 		break;
 	case SNB_UNCORE_PCI_IMC_DATA_WRITES:
 		base = SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE;
-		idx = UNCORE_PMC_IDX_FIXED + 1;
+		idx = UNCORE_PMC_IDX_FREERUNNING;
 		break;
 	default:
 		return -EINVAL;
@@ -429,99 +437,6 @@ static int snb_uncore_imc_hw_config(struct intel_uncore_box *box, struct perf_ev
 	return 0;
 }
 
-static void snb_uncore_imc_event_start(struct perf_event *event, int flags)
-{
-	struct intel_uncore_box *box = uncore_event_to_box(event);
-	u64 count;
-
-	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
-		return;
-
-	event->hw.state = 0;
-	box->n_active++;
-
-	list_add_tail(&event->active_entry, &box->active_list);
-
-	count = snb_uncore_imc_read_counter(box, event);
-	local64_set(&event->hw.prev_count, count);
-
-	if (box->n_active == 1)
-		uncore_pmu_start_hrtimer(box);
-}
-
-static void snb_uncore_imc_event_stop(struct perf_event *event, int flags)
-{
-	struct intel_uncore_box *box = uncore_event_to_box(event);
-	struct hw_perf_event *hwc = &event->hw;
-
-	if (!(hwc->state & PERF_HES_STOPPED)) {
-		box->n_active--;
-
-		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
-		hwc->state |= PERF_HES_STOPPED;
-
-		list_del(&event->active_entry);
-
-		if (box->n_active == 0)
-			uncore_pmu_cancel_hrtimer(box);
-	}
-
-	if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
-		/*
-		 * Drain the remaining delta count out of a event
-		 * that we are disabling:
-		 */
-		uncore_perf_event_update(box, event);
-		hwc->state |= PERF_HES_UPTODATE;
-	}
-}
-
-static int snb_uncore_imc_event_add(struct perf_event *event, int flags)
-{
-	struct intel_uncore_box *box = uncore_event_to_box(event);
-	struct hw_perf_event *hwc = &event->hw;
-
-	if (!box)
-		return -ENODEV;
-
-	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
-	if (!(flags & PERF_EF_START))
-		hwc->state |= PERF_HES_ARCH;
-
-	snb_uncore_imc_event_start(event, 0);
-
-	return 0;
-}
-
-static void snb_uncore_imc_event_del(struct perf_event *event, int flags)
-{
-	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);
-}
-
-static void snb_uncore_imc_event_read(struct perf_event *event)
-{
-	struct intel_uncore_box *box = uncore_event_to_box(event);
-	u64 prev_count, new_count, delta;
-	int shift;
-
-	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
-		shift = 64 - uncore_fixed_ctr_bits(box);
-	else
-		shift = 64 - uncore_perf_ctr_bits(box);
-
-	/* the hrtimer might modify the previous event value */
-again:
-	prev_count = local64_read(&event->hw.prev_count);
-	new_count = uncore_read_counter(box, event);
-	if (local64_xchg(&event->hw.prev_count, new_count) != prev_count)
-		goto again;
-
-	delta = (new_count << shift) - (prev_count << shift);
-	delta >>= shift;
-
-	local64_add(delta, &event->count);
-}
-
 int snb_pci2phy_map_init(int devid)
 {
 	struct pci_dev *dev = NULL;
@@ -553,11 +468,11 @@ int snb_pci2phy_map_init(int devid)
 static struct pmu snb_uncore_imc_pmu = {
 	.task_ctx_nr	= perf_invalid_context,
 	.event_init	= snb_uncore_imc_event_init,
-	.add		= snb_uncore_imc_event_add,
-	.del		= snb_uncore_imc_event_del,
-	.start		= snb_uncore_imc_event_start,
-	.stop		= snb_uncore_imc_event_stop,
-	.read		= snb_uncore_imc_event_read,
+	.add		= uncore_pmu_event_add,
+	.del		= uncore_pmu_event_del,
+	.start		= uncore_pmu_event_start,
+	.stop		= uncore_pmu_event_stop,
+	.read		= uncore_pmu_event_read,
 };
 
 static struct intel_uncore_ops snb_uncore_imc_ops = {
@@ -575,12 +490,10 @@ static struct intel_uncore_type snb_uncore_imc = {
 	.name		= "imc",
 	.num_counters   = 2,
 	.num_boxes	= 1,
-	.fixed_ctr_bits	= 32,
-	.fixed_ctr	= SNB_UNCORE_PCI_IMC_CTR_BASE,
+	.num_freerunning_types	= SNB_PCI_UNCORE_IMC_FREERUNNING_TYPE_MAX,
+	.freerunning	= snb_uncore_imc_freerunning,
 	.event_descs	= snb_uncore_imc_events,
 	.format_group	= &snb_uncore_imc_format_group,
-	.perf_ctr	= SNB_UNCORE_PCI_IMC_DATA_READS_BASE,
-	.event_mask	= SNB_UNCORE_PCI_IMC_EVENT_MASK,
 	.ops		= &snb_uncore_imc_ops,
 	.pmu		= &snb_uncore_imc_pmu,
 };
-- 
2.7.4


* Re: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-10-24 11:05 [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore kan.liang
                   ` (3 preceding siblings ...)
  2017-10-24 11:05 ` [PATCH V3 5/5] perf/x86/intel/uncore: clean up client IMC uncore kan.liang
@ 2017-11-02 13:46 ` Thomas Gleixner
  2017-11-02 13:51   ` Liang, Kan
  4 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2017-11-02 13:46 UTC (permalink / raw)
  To: Kan Liang; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

On Tue, 24 Oct 2017, kan.liang@intel.com wrote:
> -	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> +	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
>  		shift = 64 - uncore_fixed_ctr_bits(box);
>  	else
>  		shift = 64 - uncore_perf_ctr_bits(box);
> diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
> index db1127c..9d5cd3f 100644
> --- a/arch/x86/events/intel/uncore_snb.c
> +++ b/arch/x86/events/intel/uncore_snb.c
> @@ -498,6 +498,30 @@ static void snb_uncore_imc_event_del(struct perf_event *event, int flags)
>  	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);
>  }
>  
> +static void snb_uncore_imc_event_read(struct perf_event *event)
> +{
> +	struct intel_uncore_box *box = uncore_event_to_box(event);
> +	u64 prev_count, new_count, delta;
> +	int shift;
> +
> +	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)

And this needs to be >= because?

> +		shift = 64 - uncore_fixed_ctr_bits(box);
> +	else
> +		shift = 64 - uncore_perf_ctr_bits(box);

Thanks,

	tglx


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 13:46 ` [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for " Thomas Gleixner
@ 2017-11-02 13:51   ` Liang, Kan
  2017-11-02 13:54     ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Liang, Kan @ 2017-11-02 13:51 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

> On Tue, 24 Oct 2017, kan.liang@intel.com wrote:
> > -	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > +	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> >  		shift = 64 - uncore_fixed_ctr_bits(box);
> >  	else
> >  		shift = 64 - uncore_perf_ctr_bits(box); diff --git
> > a/arch/x86/events/intel/uncore_snb.c
> > b/arch/x86/events/intel/uncore_snb.c
> > index db1127c..9d5cd3f 100644
> > --- a/arch/x86/events/intel/uncore_snb.c
> > +++ b/arch/x86/events/intel/uncore_snb.c
> > @@ -498,6 +498,30 @@ static void snb_uncore_imc_event_del(struct
> perf_event *event, int flags)
> >  	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);  }
> >
> > +static void snb_uncore_imc_event_read(struct perf_event *event) {
> > +	struct intel_uncore_box *box = uncore_event_to_box(event);
> > +	u64 prev_count, new_count, delta;
> > +	int shift;
> > +
> > +	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> 
> And this needs to be >= because?

Patch 5/5 will clean up the client IMC uncore.
Before that, we still need it to make client IMC uncore work.

This patch isolates the >= case for client IMC uncore.

Thanks,
Kan


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 13:51   ` Liang, Kan
@ 2017-11-02 13:54     ` Thomas Gleixner
  2017-11-02 13:59       ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2017-11-02 13:54 UTC (permalink / raw)
  To: Liang, Kan; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

On Thu, 2 Nov 2017, Liang, Kan wrote:
> > On Tue, 24 Oct 2017, kan.liang@intel.com wrote:
> > > -	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > +	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > >  		shift = 64 - uncore_fixed_ctr_bits(box);
> > >  	else
> > >  		shift = 64 - uncore_perf_ctr_bits(box); diff --git
> > > a/arch/x86/events/intel/uncore_snb.c
> > > b/arch/x86/events/intel/uncore_snb.c
> > > index db1127c..9d5cd3f 100644
> > > --- a/arch/x86/events/intel/uncore_snb.c
> > > +++ b/arch/x86/events/intel/uncore_snb.c
> > > @@ -498,6 +498,30 @@ static void snb_uncore_imc_event_del(struct
> > perf_event *event, int flags)
> > >  	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);  }
> > >
> > > +static void snb_uncore_imc_event_read(struct perf_event *event) {
> > > +	struct intel_uncore_box *box = uncore_event_to_box(event);
> > > +	u64 prev_count, new_count, delta;
> > > +	int shift;
> > > +
> > > +	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > 
> > And this needs to be >= because?
> 
> Patch 5/5 will clean up the client IMC uncore.
> Before that, we still need it to make client IMC uncore work.
> 
> This patch isolates the >= case for client IMC uncore.

Fair enough. A comment to that effect (even when removed later) would have
avoided that question.

Thanks,

	tglx


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 13:54     ` Thomas Gleixner
@ 2017-11-02 13:59       ` Thomas Gleixner
  2017-11-02 14:06         ` Liang, Kan
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2017-11-02 13:59 UTC (permalink / raw)
  To: Liang, Kan; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > On Tue, 24 Oct 2017, kan.liang@intel.com wrote:
> > > > -	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > > +	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > > >  		shift = 64 - uncore_fixed_ctr_bits(box);
> > > >  	else
> > > >  		shift = 64 - uncore_perf_ctr_bits(box); diff --git
> > > > a/arch/x86/events/intel/uncore_snb.c
> > > > b/arch/x86/events/intel/uncore_snb.c
> > > > index db1127c..9d5cd3f 100644
> > > > --- a/arch/x86/events/intel/uncore_snb.c
> > > > +++ b/arch/x86/events/intel/uncore_snb.c
> > > > @@ -498,6 +498,30 @@ static void snb_uncore_imc_event_del(struct
> > > perf_event *event, int flags)
> > > >  	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);  }
> > > >
> > > > +static void snb_uncore_imc_event_read(struct perf_event *event) {
> > > > +	struct intel_uncore_box *box = uncore_event_to_box(event);
> > > > +	u64 prev_count, new_count, delta;
> > > > +	int shift;
> > > > +
> > > > +	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > 
> > > And this needs to be >= because?
> > 
> > Patch 5/5 will clean up the client IMC uncore.
> > Before that, we still need it to make client IMC uncore work.
> > 
> > This patch isolates the >= case for client IMC uncore.
> 
> Fair enough. A comment to that effect (even when removed later) would have
> avoided that question.

Thinking more about it. The current code only supports the fixed one,
right? So why would it deal with anything > FIXED?

Thanks,

	tglx


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 13:59       ` Thomas Gleixner
@ 2017-11-02 14:06         ` Liang, Kan
  2017-11-02 14:11           ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Liang, Kan @ 2017-11-02 14:06 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

> On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > On Tue, 24 Oct 2017, kan.liang@intel.com wrote:
> > > > > -	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > > > +	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > > > >  		shift = 64 - uncore_fixed_ctr_bits(box);
> > > > >  	else
> > > > >  		shift = 64 - uncore_perf_ctr_bits(box); diff --git
> > > > > a/arch/x86/events/intel/uncore_snb.c
> > > > > b/arch/x86/events/intel/uncore_snb.c
> > > > > index db1127c..9d5cd3f 100644
> > > > > --- a/arch/x86/events/intel/uncore_snb.c
> > > > > +++ b/arch/x86/events/intel/uncore_snb.c
> > > > > @@ -498,6 +498,30 @@ static void snb_uncore_imc_event_del(struct
> > > > perf_event *event, int flags)
> > > > >  	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);  }
> > > > >
> > > > > +static void snb_uncore_imc_event_read(struct perf_event *event) {
> > > > > +	struct intel_uncore_box *box = uncore_event_to_box(event);
> > > > > +	u64 prev_count, new_count, delta;
> > > > > +	int shift;
> > > > > +
> > > > > +	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > >
> > > > And this needs to be >= because?
> > >
> > > Patch 5/5 will clean up the client IMC uncore.
> > > Before that, we still need it to make client IMC uncore work.
> > >
> > > This patch isolates the >= case for client IMC uncore.
> >
> > Fair enough. A comment to that effect (even when removed later) would
> > have avoided that question.
> 
> Thinking more about it. The current code only supports the fixed one, right?
> So why would it deal with anything > FIXED?
> 

There are two free running counters in the IMC.
To support the second one, the previous code implicitly does
UNCORE_PMC_IDX_FIXED + 1.
So it has to deal with the > FIXED case.

	case SNB_UNCORE_PCI_IMC_DATA_READS:
		base = SNB_UNCORE_PCI_IMC_DATA_READS_BASE;
		idx = UNCORE_PMC_IDX_FIXED;
		break;
	case SNB_UNCORE_PCI_IMC_DATA_WRITES:
		base = SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE;
		idx = UNCORE_PMC_IDX_FIXED + 1;
		break;
	default:
		return -EINVAL;

Thanks,
Kan


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 14:06         ` Liang, Kan
@ 2017-11-02 14:11           ` Thomas Gleixner
  2017-11-02 14:16             ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2017-11-02 14:11 UTC (permalink / raw)
  To: Liang, Kan; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

On Thu, 2 Nov 2017, Liang, Kan wrote:
> > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > > On Tue, 24 Oct 2017, kan.liang@intel.com wrote:
> > > > > > -	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > > > > +	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > > > > >  		shift = 64 - uncore_fixed_ctr_bits(box);
> > > > > >  	else
> > > > > >  		shift = 64 - uncore_perf_ctr_bits(box); diff --git
> > > > > > a/arch/x86/events/intel/uncore_snb.c
> > > > > > b/arch/x86/events/intel/uncore_snb.c
> > > > > > index db1127c..9d5cd3f 100644
> > > > > > --- a/arch/x86/events/intel/uncore_snb.c
> > > > > > +++ b/arch/x86/events/intel/uncore_snb.c
> > > > > > @@ -498,6 +498,30 @@ static void snb_uncore_imc_event_del(struct
> > > > > perf_event *event, int flags)
> > > > > >  	snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);  }
> > > > > >
> > > > > > +static void snb_uncore_imc_event_read(struct perf_event *event) {
> > > > > > +	struct intel_uncore_box *box = uncore_event_to_box(event);
> > > > > > +	u64 prev_count, new_count, delta;
> > > > > > +	int shift;
> > > > > > +
> > > > > > +	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > > >
> > > > > And this needs to be >= because?
> > > >
> > > > Patch 5/5 will clean up the client IMC uncore.
> > > > Before that, we still need it to make client IMC uncore work.
> > > >
> > > > This patch isolates the >= case for client IMC uncore.
> > >
> > > Fair enough. A comment to that effect (even when removed later) would
> > > have avoided that question.
> > 
> > Thinking more about it. The current code only supports the fixed one, right?
> > So why would it deal with anything > FIXED?
> > 
> 
> There are two free running counters in the IMC.
> To support the second one, the previous code implicitly does
> UNCORE_PMC_IDX_FIXED + 1.
> So it has to deal with the > FIXED case.
> 
> 	case SNB_UNCORE_PCI_IMC_DATA_READS:
> 		base = SNB_UNCORE_PCI_IMC_DATA_READS_BASE;
> 		idx = UNCORE_PMC_IDX_FIXED;
> 		break;
> 	case SNB_UNCORE_PCI_IMC_DATA_WRITES:
> 		base = SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE;
> 		idx = UNCORE_PMC_IDX_FIXED + 1;
> 		break;
> 	default:
> 		return -EINVAL;

Fugly that is, but as its cleaned up later....

Thanks,

	tglx


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 14:11           ` Thomas Gleixner
@ 2017-11-02 14:16             ` Thomas Gleixner
  2017-11-02 14:27               ` Liang, Kan
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2017-11-02 14:16 UTC (permalink / raw)
  To: Liang, Kan; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > > Patch 5/5 will clean up the client IMC uncore.
> > > > > Before that, we still need it to make client IMC uncore work.
> > > > >
> > > > > This patch isolates the >= case for client IMC uncore.
> > > >
> > > > Fair enough. A comment to that effect (even when removed later) would
> > > > have avoided that question.
> > > 
> > > Thinking more about it. The current code only supports the fixed one, right?
> > > So why would it deal with anything > FIXED?
> > > 
> > 
> > There are two free running counters in the IMC.
> > To support the second one, the previous code implicitly does
> > UNCORE_PMC_IDX_FIXED + 1.
> > So it has to deal with the > FIXED case.
> > 
> > 	case SNB_UNCORE_PCI_IMC_DATA_READS:
> > 		base = SNB_UNCORE_PCI_IMC_DATA_READS_BASE;
> > 		idx = UNCORE_PMC_IDX_FIXED;
> > 		break;
> > 	case SNB_UNCORE_PCI_IMC_DATA_WRITES:
> > 		base = SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE;
> > 		idx = UNCORE_PMC_IDX_FIXED + 1;
> > 		break;
> > 	default:
> > 		return -EINVAL;
> 
> Fugly that is, but as its cleaned up later....

But then you have this in uncore_perf_event_update():
 
-       if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
+       if (event->hw.idx == UNCORE_PMC_IDX_FIXED)

So how is that supposed to work?

I think your patch order is wrong and breaks bisectability all over the
place as you fixup the UNCORE_PMC_IDX_FIXED + 1 hackery in 5/5.

Thanks,

	tglx


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 14:16             ` Thomas Gleixner
@ 2017-11-02 14:27               ` Liang, Kan
  2017-11-02 14:56                 ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Liang, Kan @ 2017-11-02 14:27 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

> On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > > > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > > > Patch 5/5 will clean up the client IMC uncore.
> > > > > > Before that, we still need it to make client IMC uncore work.
> > > > > >
> > > > > > This patch isolates the >= case for client IMC uncore.
> > > > >
> > > > > Fair enough. A comment to that effect (even when removed later)
> > > > > would have avoided that question.
> > > >
> > > > Thinking more about it. The current code only supports the fixed one,
> right?
> > > > So why would it deal with anything > FIXED?
> > > >
> > >
> > > There are two free running counters in the IMC.
> > > To support the second one, the previous code implicitly does
> > > UNCORE_PMC_IDX_FIXED + 1.
> > > So it has to deal with the > FIXED case.
> > >
> > > 	case SNB_UNCORE_PCI_IMC_DATA_READS:
> > > 		base = SNB_UNCORE_PCI_IMC_DATA_READS_BASE;
> > > 		idx = UNCORE_PMC_IDX_FIXED;
> > > 		break;
> > > 	case SNB_UNCORE_PCI_IMC_DATA_WRITES:
> > > 		base = SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE;
> > > 		idx = UNCORE_PMC_IDX_FIXED + 1;
> > > 		break;
> > > 	default:
> > > 		return -EINVAL;
> >
> > Fugly that is, but as its cleaned up later....
> 
> But then you have this in uncore_perf_event_update():
> 
> -       if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> +       if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> 
> So how is that supposed to work?

This is for the generic code.

In the previous code, the IMC event_read path used the generic code,
so the generic code had to deal with >=.

Now a customized event_read function, 'snb_uncore_imc_event_read',
is introduced for the IMC, so the IMC no longer touches the generic code.
The generic code is corrected here.

Thanks,
Kan
> 
> I think your patch order is wrong and breaks bisectability all over the place as
> you fixup the UNCORE_PMC_IDX_FIXED + 1 hackery in 5/5.
> 
> Thanks,
> 
> 	tglx


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 14:27               ` Liang, Kan
@ 2017-11-02 14:56                 ` Thomas Gleixner
  2017-11-02 15:10                   ` Liang, Kan
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2017-11-02 14:56 UTC (permalink / raw)
  To: Liang, Kan; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

On Thu, 2 Nov 2017, Liang, Kan wrote:
> > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > But then you have this in uncore_perf_event_update():
> > 
> > -       if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > +       if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > 
> > So how is that supposed to work?
> 
> This is for generic code. 
> 
> In previous code, the event_read function for IMC use generic code.
> So we have to deal with >= in generic code.
> 
> Now, customized event_read function 'snb_uncore_imc_event_read'
>  is introduced for IMC. So IMC does not touch the generic code. 
> The generic code is corrected here.

The customized read function does not help at all.

uncore_perf_event_update() is used from snb_uncore_imc_event_stop(). So
it's broken with this patch.

This is a complete and utter mess to review.

Thanks,

	tglx


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 14:56                 ` Thomas Gleixner
@ 2017-11-02 15:10                   ` Liang, Kan
  2017-11-02 15:39                     ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Liang, Kan @ 2017-11-02 15:10 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

> On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > But then you have this in uncore_perf_event_update():
> > >
> > > -       if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > +       if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > >
> > > So how is that supposed to work?
> >
> > This is for the generic code.
> >
> > In the previous code, the IMC event_read path used the generic code,
> > so the generic code had to deal with >=.
> >
> > Now a customized event_read function, 'snb_uncore_imc_event_read',
> > is introduced for the IMC, so the IMC no longer touches the generic code.
> > The generic code is corrected here.
> 
> The customized read function does not help at all.
> 
> uncore_perf_event_update() is used from snb_uncore_imc_event_stop(). So
> it's broken with this patch.

Right, the customized read function needs to replace it there as well.
I will fix it in the next version.

> 
> This is a complete and utter mess to review.

Most of the customized functions will be cleaned up after the series is applied.

Thanks,
Kan


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 15:10                   ` Liang, Kan
@ 2017-11-02 15:39                     ` Thomas Gleixner
  2017-11-02 15:57                       ` Liang, Kan
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2017-11-02 15:39 UTC (permalink / raw)
  To: Liang, Kan; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

On Thu, 2 Nov 2017, Liang, Kan wrote:
> > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > > But then you have this in uncore_perf_event_update():
> > > >
> > > > -       if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > > +       if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > > >
> > > > So how is that supposed to work?
> > >
> > > This is for the generic code.
> > >
> > > In the previous code, the IMC event_read path used the generic code,
> > > so the generic code had to deal with >=.
> > >
> > > Now a customized event_read function, 'snb_uncore_imc_event_read',
> > > is introduced for the IMC, so the IMC no longer touches the generic code.
> > > The generic code is corrected here.
> > 
> > The customized read function does not help at all.
> > 
> > uncore_perf_event_update() is used from snb_uncore_imc_event_stop(). So
> > it's broken with this patch.
> 
> Right, the customized read function needs to replace it there as well.
> I will fix it in the next version.
> 
> > 
> > This is a complete and utter mess to review.
> 
> Most of the customized functions will be cleaned up after the series is applied.

Correct, but please try to split 2/5 into parts as well. It's a hodgepodge
of new things and modifications to the code. The logical split is to add
the new data structures and struct members along with the inline helpers
and then in the next patch make use of it.

Thanks,

	tglx


* RE: [PATCH V3 1/5] perf/x86/intel/uncore: customized pmu event read for client IMC uncore
  2017-11-02 15:39                     ` Thomas Gleixner
@ 2017-11-02 15:57                       ` Liang, Kan
  0 siblings, 0 replies; 17+ messages in thread
From: Liang, Kan @ 2017-11-02 15:57 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: peterz, mingo, linux-kernel, acme, eranian, ak

> On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > > > But then you have this in uncore_perf_event_update():
> > > > >
> > > > > -       if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > > > +       if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > > > >
> > > > > So how is that supposed to work?
> > > >
> > > > This is for the generic code.
> > > >
> > > > In the previous code, the IMC event_read path used the generic code,
> > > > so the generic code had to deal with >=.
> > > >
> > > > Now a customized event_read function, 'snb_uncore_imc_event_read',
> > > > is introduced for the IMC, so the IMC no longer touches the generic code.
> > > > The generic code is corrected here.
> > >
> > > The customized read function does not help at all.
> > >
> > > uncore_perf_event_update() is used from snb_uncore_imc_event_stop().
> So
> > > it's broken with this patch.
> >
> > Right, the customized read function needs to replace it there as well.
> > I will fix it in the next version.
> >
> > >
> > > This is a complete and utter mess to review.
> >
> > Most of the customized functions will be cleaned up after the series is applied.
> 
> Correct, but please try to split 2/5 into parts as well. It's a hodgepodge
> of new things and modifications to the code. The logical split is to add
> the new data structures and struct members along with the inline helpers
> and then in the next patch make use of it.
>

Sure, I will do that.

Also, I just noticed that '>=' is used in nhmex_uncore_msr_enable_event
as well, which does not make sense and is unnecessary.
For Nehalem and Westmere, there is only one fixed counter in the W-Box.
It's an NHM-specific issue, not related to the generic code.
I will also clean it up in the next version.
 
Thanks,
Kan

