* [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events
@ 2017-10-19 16:55 kan.liang
  2017-10-19 16:55 ` [PATCH V2 2/4] perf/x86/intel/uncore: inline function to check the fixed counter event kan.liang
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: kan.liang @ 2017-10-19 16:55 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

The clinet IMC uncore is the only one who claims two 'fixed counters'.
To specially handle it, event->hw.idx >= UNCORE_PMC_IDX_FIXED is used to
check fixed counters in the generic uncore_perf_event_update.
It does not have problem in current code. Because there are no counters
whose idx is larger than fixed counters. However, it will have problem
if new counter type is introduced in generic code. For example,
freerunning counters.

Actually, the 'fixed counters' in the clinet IMC uncore is not
traditional fixed counter. They are freerunning counters, which don't
need the idx to indicate which counter is assigned. They also have same
bits wide. So it's OK to let them use the same idx. event_base is good
enough to select the proper freerunning counter.

There is no traditional fixed counter in clinet IMC uncore. Let them use
the same idx as fixed event for clinet IMC uncore events.

The following patch will remove the special codes in generic
uncore_perf_event_update.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V1:
 - New patch to address the event->hw.idx >= UNCORE_PMC_IDX_FIXED check

 arch/x86/events/intel/uncore_snb.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
index db1127c..107e772 100644
--- a/arch/x86/events/intel/uncore_snb.c
+++ b/arch/x86/events/intel/uncore_snb.c
@@ -400,6 +400,13 @@ static int snb_uncore_imc_event_init(struct perf_event *event)
 	event->hw.branch_reg.idx = EXTRA_REG_NONE;
 	/*
 	 * check event is known (whitelist, determines counter)
+	 *
+	 * The events and freerunning counters are 1:1 mapped.
+	 * The freerunning counters are always available.
+	 * It doesn't need hw.idx to indicate which counter is assigned.
+	 * There is no traditional fixed counter support for client IMC.
+	 * So let them reuse the same idx as fixed event.
+	 * Base will be used to get the proper freerunning counter.
 	 */
 	switch (cfg) {
 	case SNB_UNCORE_PCI_IMC_DATA_READS:
@@ -408,7 +415,7 @@ static int snb_uncore_imc_event_init(struct perf_event *event)
 		break;
 	case SNB_UNCORE_PCI_IMC_DATA_WRITES:
 		base = SNB_UNCORE_PCI_IMC_DATA_WRITES_BASE;
-		idx = UNCORE_PMC_IDX_FIXED + 1;
+		idx = UNCORE_PMC_IDX_FIXED;
 		break;
 	default:
 		return -EINVAL;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH V2 2/4] perf/x86/intel/uncore: inline function to check the fixed counter event
  2017-10-19 16:55 [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events kan.liang
@ 2017-10-19 16:55 ` kan.liang
  2017-10-20 14:13   ` Thomas Gleixner
  2017-10-19 16:55 ` [PATCH V2 3/4] perf/x86/intel/uncore: add infrastructure for freerunning counters kan.liang
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: kan.liang @ 2017-10-19 16:55 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

Remove the special codes in generic uncore_perf_event_update.
Introduce inline function to check the fixed counter event.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V1:
 - New patch to address the event->hw.idx >= UNCORE_PMC_IDX_FIXED check

 arch/x86/events/intel/uncore.c | 4 ++--
 arch/x86/events/intel/uncore.h | 5 +++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 1c5390f..76c1c78 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -203,7 +203,7 @@ static void uncore_assign_hw_event(struct intel_uncore_box *box,
 	hwc->idx = idx;
 	hwc->last_tag = ++box->tags[idx];
 
-	if (hwc->idx == UNCORE_PMC_IDX_FIXED) {
+	if (uncore_pmc_fixed(hwc->idx)) {
 		hwc->event_base = uncore_fixed_ctr(box);
 		hwc->config_base = uncore_fixed_ctl(box);
 		return;
@@ -218,7 +218,7 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
 	u64 prev_count, new_count, delta;
 	int shift;
 
-	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
+	if (uncore_pmc_fixed(event->hw.idx))
 		shift = 64 - uncore_fixed_ctr_bits(box);
 	else
 		shift = 64 - uncore_perf_ctr_bits(box);
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index df5989f..0ff08fba 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -156,6 +156,11 @@ static ssize_t __uncore_##_var##_show(struct kobject *kobj,		\
 static struct kobj_attribute format_attr_##_var =			\
 	__ATTR(_name, 0444, __uncore_##_var##_show, NULL)
 
+static inline bool uncore_pmc_fixed(int idx)
+{
+	return (idx == UNCORE_PMC_IDX_FIXED);
+}
+
 static inline unsigned uncore_pci_box_ctl(struct intel_uncore_box *box)
 {
 	return box->pmu->type->box_ctl;
-- 
2.7.4

* [PATCH V2 3/4] perf/x86/intel/uncore: add infrastructure for freerunning counters
  2017-10-19 16:55 [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events kan.liang
  2017-10-19 16:55 ` [PATCH V2 2/4] perf/x86/intel/uncore: inline function to check the fixed counter event kan.liang
@ 2017-10-19 16:55 ` kan.liang
  2017-10-20 14:15   ` Thomas Gleixner
  2017-10-19 16:55 ` [PATCH V2 4/4] perf/x86/intel/uncore: SKX support for IIO freerunning counter kan.liang
  2017-10-20 14:12 ` [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events Thomas Gleixner
  3 siblings, 1 reply; 9+ messages in thread
From: kan.liang @ 2017-10-19 16:55 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

There are a number of freerunning counters introduced for uncore.
For example, Skylake Server has IIO freerunning counters to collect
Input/Output x BW/Utilization.

The freerunning counter is similar as fixed counter, except it cannot
be written by SW. It needs to be specially handled in generic code and
not added in box->events list.

Introduce a new idx to indicate the freerunning counter. Only one idx is
enough for all freerunning counters. Because event and freerunning
counter are always 1:1 mapped. The freerunning counter is always
available. It doesn't need extra idx to indicate the assigned counter.

The event code for freerunning event is shared with fixed event, which
is 0xff. The umask of freerunning event starts from 0x10. The umask less
than 0x10 is reserved for fixed event.

The Freerunning counters could have different MSR location and offset.
Accordingly, they are divided into different types. Each type is limited
to only have at most 16 events.
So the umask of the first free running events type starts from 0x10. The
umask of the second starts from 0x20. The rest can be done in the same
manner.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V1:
 - Split the patch for generic change
 - Add more comments
 - Use unsigned int to replace unsigned
 - s/type/types/ for num_free_running_type
 - Use unified name freerunning

 arch/x86/events/intel/uncore.c |  61 +++++++++++++++++++++++--
 arch/x86/events/intel/uncore.h | 101 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 158 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 76c1c78..651046c 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -218,7 +218,9 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
 	u64 prev_count, new_count, delta;
 	int shift;
 
-	if (uncore_pmc_fixed(event->hw.idx))
+	if (uncore_pmc_freerunning(event->hw.idx))
+		shift = 64 - uncore_freerunning_bits(box, event);
+	else if (uncore_pmc_fixed(event->hw.idx))
 		shift = 64 - uncore_fixed_ctr_bits(box);
 	else
 		shift = 64 - uncore_perf_ctr_bits(box);
@@ -362,6 +364,10 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader,
 		if (n >= max_count)
 			return -EINVAL;
 
+		/* freerunning event is not tracked by box->events list */
+		if (uncore_pmc_freerunning(event->hw.idx))
+			continue;
+
 		box->event_list[n] = event;
 		n++;
 	}
@@ -454,10 +460,21 @@ static void uncore_pmu_event_start(struct perf_event *event, int flags)
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	int idx = event->hw.idx;
 
-	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
 		return;
 
-	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
+	/*
+	 * Freerunning counters cannot be written by SW.
+	 * Does not need to enable it or add the event to box->events list.
+	 * Use current value as start point.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx)) {
+		local64_set(&event->hw.prev_count,
+			    uncore_read_counter(box, event));
+		return;
+	}
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
 		return;
 
 	event->hw.state = 0;
@@ -479,6 +496,15 @@ static void uncore_pmu_event_stop(struct perf_event *event, int flags)
 	struct intel_uncore_box *box = uncore_event_to_box(event);
 	struct hw_perf_event *hwc = &event->hw;
 
+	/*
+	 * Does not need to disable freerunning counters.
+	 * Read current value as end.
+	 */
+	if (uncore_pmc_freerunning(hwc->idx)) {
+		uncore_perf_event_update(box, event);
+		return;
+	}
+
 	if (__test_and_clear_bit(hwc->idx, box->active_mask)) {
 		uncore_disable_event(box, event);
 		box->n_active--;
@@ -512,6 +538,17 @@ static int uncore_pmu_event_add(struct perf_event *event, int flags)
 	if (!box)
 		return -ENODEV;
 
+	/*
+	 * Event and freerunning counters are 1:1 mapped
+	 * Don't need to assign event.
+	 */
+	if (uncore_pmc_freerunning(hwc->idx)) {
+		event->hw.event_base = uncore_freerunning_msr(box, event);
+		if (flags & PERF_EF_START)
+			uncore_pmu_event_start(event, 0);
+		return 0;
+	}
+
 	ret = n = uncore_collect_events(box, event, false);
 	if (ret < 0)
 		return ret;
@@ -570,6 +607,13 @@ static void uncore_pmu_event_del(struct perf_event *event, int flags)
 
 	uncore_pmu_event_stop(event, PERF_EF_UPDATE);
 
+	/*
+	 * Freerunning counters cannot be written by SW.
+	 * No need to force event->hw.idx = -1 to reassign the counter.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx))
+		return;
+
 	for (i = 0; i < box->n_events; i++) {
 		if (event == box->event_list[i]) {
 			uncore_put_event_constraint(box, event);
@@ -603,6 +647,13 @@ static int uncore_validate_group(struct intel_uncore_pmu *pmu,
 	struct intel_uncore_box *fake_box;
 	int ret = -EINVAL, n;
 
+	/*
+	 * Event and freerunning counters are 1:1 mapped,
+	 * which is always available.
+	 */
+	if (uncore_pmc_freerunning(event->hw.idx))
+		return 0;
+
 	fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE);
 	if (!fake_box)
 		return -ENOMEM;
@@ -690,6 +741,10 @@ static int uncore_pmu_event_init(struct perf_event *event)
 
 		/* fixed counters have event field hardcoded to zero */
 		hwc->config = 0ULL;
+	} else if (is_freerunning_event(event)) {
+		if (!check_valid_freerunning_event(box, event))
+			return -EINVAL;
+		event->hw.idx = UNCORE_PMC_IDX_FREERUNNING;
 	} else {
 		hwc->config = event->attr.config &
 			      (pmu->type->event_mask | ((u64)pmu->type->event_mask_ext << 32));
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index 0ff08fba..3ecf2c4 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -11,8 +11,13 @@
 
 #define UNCORE_FIXED_EVENT		0xff
 #define UNCORE_PMC_IDX_MAX_GENERIC	8
+#define UNCORE_PMC_IDX_MAX_FIXED	1
+#define UNCORE_PMC_IDX_MAX_FREERUNNING	1
 #define UNCORE_PMC_IDX_FIXED		UNCORE_PMC_IDX_MAX_GENERIC
-#define UNCORE_PMC_IDX_MAX		(UNCORE_PMC_IDX_FIXED + 1)
+#define UNCORE_PMC_IDX_FREERUNNING	(UNCORE_PMC_IDX_FIXED + \
+					UNCORE_PMC_IDX_MAX_FIXED)
+#define UNCORE_PMC_IDX_MAX		(UNCORE_PMC_IDX_FREERUNNING + \
+					UNCORE_PMC_IDX_MAX_FREERUNNING)
 
 #define UNCORE_PCI_DEV_FULL_DATA(dev, func, type, idx)	\
 		((dev << 24) | (func << 16) | (type << 8) | idx)
@@ -34,6 +39,7 @@ struct intel_uncore_ops;
 struct intel_uncore_pmu;
 struct intel_uncore_box;
 struct uncore_event_desc;
+struct freerunning_msr;
 
 struct intel_uncore_type {
 	const char *name;
@@ -41,6 +47,7 @@ struct intel_uncore_type {
 	int num_boxes;
 	int perf_ctr_bits;
 	int fixed_ctr_bits;
+	int num_freerunning_types;
 	unsigned perf_ctr;
 	unsigned event_ctl;
 	unsigned event_mask;
@@ -58,6 +65,7 @@ struct intel_uncore_type {
 	struct intel_uncore_pmu *pmus;
 	struct intel_uncore_ops *ops;
 	struct uncore_event_desc *event_descs;
+	struct freerunning_msr *freerunning;
 	const struct attribute_group *attr_groups[4];
 	struct pmu *pmu; /* for custom pmu ops */
 };
@@ -128,6 +136,13 @@ struct uncore_event_desc {
 	const char *config;
 };
 
+struct freerunning_msr {
+	unsigned int msr_base;
+	unsigned int msr_off;
+	unsigned int num_counters;
+	unsigned int bits;
+};
+
 struct pci2phy_map {
 	struct list_head list;
 	int segment;
@@ -161,6 +176,11 @@ static inline bool uncore_pmc_fixed(int idx)
 	return (idx == UNCORE_PMC_IDX_FIXED);
 }
 
+static inline bool uncore_pmc_freerunning(int idx)
+{
+	return (idx == UNCORE_PMC_IDX_FREERUNNING);
+}
+
 static inline unsigned uncore_pci_box_ctl(struct intel_uncore_box *box)
 {
 	return box->pmu->type->box_ctl;
@@ -218,6 +238,44 @@ static inline unsigned uncore_msr_fixed_ctr(struct intel_uncore_box *box)
 	return box->pmu->type->fixed_ctr + uncore_msr_box_offset(box);
 }
 
+
+/*
+ * Freerunning counter is similar as fixed counter, except it cannot be
+ * written by SW.
+ *
+ * Freerunning MSR events have the same event code 0xff as fixed events.
+ * The Freerunning events umask starts from 0x10.
+ * The umask which is less than 0x10 is reserved for fixed events.
+ *
+ * The Freerunning events are divided into different types according to
+ * MSR location, bit width or definition. Each type is limited to only have
+ * at most 16 events.
+ * So the umask of first type starts from 0x10, the second starts from 0x20,
+ * the rest can be done in the same manner.
+ */
+#define UNCORE_FREERUNNING_MSR_START		0x10
+static inline unsigned int uncore_freerunning_msr_idx(u64 config)
+{
+	return ((config >> 8) & 0xf);
+}
+
+static inline unsigned int uncore_freerunning_msr_type(u64 config)
+{
+	return ((((config >> 8) - UNCORE_FREERUNNING_MSR_START) >> 4) & 0xf);
+}
+
+static inline
+unsigned int uncore_freerunning_msr(struct intel_uncore_box *box,
+				    struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_msr_type(event->attr.config);
+	unsigned int idx = uncore_freerunning_msr_idx(event->attr.config);
+	struct intel_uncore_pmu *pmu = box->pmu;
+
+	return pmu->type->freerunning[type].msr_base + idx +
+	       pmu->type->freerunning[type].msr_off * pmu->pmu_idx;
+}
+
 static inline
 unsigned uncore_msr_event_ctl(struct intel_uncore_box *box, int idx)
 {
@@ -280,11 +338,52 @@ static inline int uncore_fixed_ctr_bits(struct intel_uncore_box *box)
 	return box->pmu->type->fixed_ctr_bits;
 }
 
+static inline
+unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+				     struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_msr_type(event->attr.config);
+
+	return box->pmu->type->freerunning[type].bits;
+}
+
+static inline int uncore_num_freerunning(struct intel_uncore_box *box,
+					 struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_msr_type(event->attr.config);
+
+	return box->pmu->type->freerunning[type].num_counters;
+}
+
+static inline int uncore_num_freerunning_types(struct intel_uncore_box *box,
+					       struct perf_event *event)
+{
+	return box->pmu->type->num_freerunning_types;
+}
+
+static inline bool check_valid_freerunning_event(struct intel_uncore_box *box,
+						 struct perf_event *event)
+{
+	unsigned int type = uncore_freerunning_msr_type(event->attr.config);
+	unsigned int idx = uncore_freerunning_msr_idx(event->attr.config);
+
+	return ((type < uncore_num_freerunning_types(box, event)) &&
+	       (idx < uncore_num_freerunning(box, event)));
+}
+
 static inline int uncore_num_counters(struct intel_uncore_box *box)
 {
 	return box->pmu->type->num_counters;
 }
 
+static inline bool is_freerunning_event(struct perf_event *event)
+{
+	u64 cfg = event->attr.config;
+
+	return (((cfg & UNCORE_FIXED_EVENT) == UNCORE_FIXED_EVENT) &&
+		(((cfg >> 8) & 0xff) >= UNCORE_FREERUNNING_MSR_START));
+}
+
 static inline void uncore_disable_box(struct intel_uncore_box *box)
 {
 	if (box->pmu->type->ops->disable_box)
-- 
2.7.4

* [PATCH V2 4/4] perf/x86/intel/uncore: SKX support for IIO freerunning counter
  2017-10-19 16:55 [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events kan.liang
  2017-10-19 16:55 ` [PATCH V2 2/4] perf/x86/intel/uncore: inline function to check the fixed counter event kan.liang
  2017-10-19 16:55 ` [PATCH V2 3/4] perf/x86/intel/uncore: add infrastructure for freerunning counters kan.liang
@ 2017-10-19 16:55 ` kan.liang
  2017-10-20 14:15   ` Thomas Gleixner
  2017-10-20 14:12 ` [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events Thomas Gleixner
  3 siblings, 1 reply; 9+ messages in thread
From: kan.liang @ 2017-10-19 16:55 UTC (permalink / raw)
  To: tglx, peterz, mingo, linux-kernel; +Cc: acme, eranian, ak, Kan Liang

From: Kan Liang <Kan.liang@intel.com>

As of Skylake Server, there are a number of free-running counters in
each IIO Box that collect counts for per box IO clocks and per Port
Input/Output x BW/Utilization.

Freerunning counters cannot be written by SW. Counting will be suspended
only when the IIO Box is powered down.

The bit width of freerunning counter is 36-bit.

Signed-off-by: Kan Liang <Kan.liang@intel.com>
---

Changes since V1:
 - Split the patch for SKX support

 arch/x86/events/intel/uncore_snbep.c | 58 ++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index db1fe37..5c8ba6b 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -3459,6 +3459,61 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
 	.read_counter		= uncore_msr_read_counter,
 };
 
+enum perf_uncore_iio_freerunning_msr_type_id {
+	SKX_IIO_MSR_IOCLK			= 0,
+	SKX_IIO_MSR_BW				= 1,
+	SKX_IIO_MSR_UTIL			= 2,
+
+	SKX_IIO_FREERUNNING_MSR_TYPE_MAX,
+};
+
+
+static struct freerunning_msr skx_iio_freerunning_msr[] = {
+	[SKX_IIO_MSR_IOCLK]	= { 0xa45, 0x20, 1, 36 },
+	[SKX_IIO_MSR_BW]	= { 0xb00, 0x10, 8, 36 },
+	[SKX_IIO_MSR_UTIL]	= { 0xb08, 0x10, 8, 36 },
+};
+
+static struct uncore_event_desc skx_uncore_iio_events[] = {
+	/* Free-Running IO CLOCKS Counter */
+	INTEL_UNCORE_EVENT_DESC(ioclk,			"event=0xff,umask=0x10"),
+	/* Free-Running IIO Bandwidth Counters */
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0,		"event=0xff,umask=0x20"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1,		"event=0xff,umask=0x21"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2,		"event=0xff,umask=0x22"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3,		"event=0xff,umask=0x23"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0,		"event=0xff,umask=0x24"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port0.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1,		"event=0xff,umask=0x25"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port1.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2,		"event=0xff,umask=0x26"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port2.unit,	"MiB"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3,		"event=0xff,umask=0x27"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3.scale,	"3.814697266e-6"),
+	INTEL_UNCORE_EVENT_DESC(bw_out_port3.unit,	"MiB"),
+	/* Free-running IIO Utilization Counters */
+	INTEL_UNCORE_EVENT_DESC(util_in_port0,		"event=0xff,umask=0x30"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port0,		"event=0xff,umask=0x31"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port1,		"event=0xff,umask=0x32"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port1,		"event=0xff,umask=0x33"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port2,		"event=0xff,umask=0x34"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port2,		"event=0xff,umask=0x35"),
+	INTEL_UNCORE_EVENT_DESC(util_in_port3,		"event=0xff,umask=0x36"),
+	INTEL_UNCORE_EVENT_DESC(util_out_port3,		"event=0xff,umask=0x37"),
+	{ /* end: all zeroes */ },
+};
+
 static struct intel_uncore_type skx_uncore_iio = {
 	.name			= "iio",
 	.num_counters		= 4,
@@ -3470,8 +3525,11 @@ static struct intel_uncore_type skx_uncore_iio = {
 	.event_mask_ext		= SKX_IIO_PMON_RAW_EVENT_MASK_EXT,
 	.box_ctl		= SKX_IIO0_MSR_PMON_BOX_CTL,
 	.msr_offset		= SKX_IIO_MSR_OFFSET,
+	.num_freerunning_types	= SKX_IIO_FREERUNNING_MSR_TYPE_MAX,
+	.freerunning		= skx_iio_freerunning_msr,
 	.constraints		= skx_uncore_iio_constraints,
 	.ops			= &skx_uncore_iio_ops,
+	.event_descs		= skx_uncore_iio_events,
 	.format_group		= &skx_uncore_iio_format_group,
 };
 
-- 
2.7.4

* Re: [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events
  2017-10-19 16:55 [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events kan.liang
                   ` (2 preceding siblings ...)
  2017-10-19 16:55 ` [PATCH V2 4/4] perf/x86/intel/uncore: SKX support for IIO freerunning counter kan.liang
@ 2017-10-20 14:12 ` Thomas Gleixner
  2017-10-20 15:04   ` Liang, Kan
  3 siblings, 1 reply; 9+ messages in thread
From: Thomas Gleixner @ 2017-10-20 14:12 UTC (permalink / raw)
  To: Kan Liang; +Cc: Peter Zijlstra, x86, LKML, acme, eranian, ak

On Thu, 19 Oct 2017, kan.liang@intel.com wrote:
> The clinet IMC uncore is the only one who claims two 'fixed counters'.

clinet? 

> To specially handle it, event->hw.idx >= UNCORE_PMC_IDX_FIXED is used to
> check fixed counters in the generic uncore_perf_event_update.
> It does not have problem in current code.

I disagree. While it has no functional problem, it's an obscure hack, which
makes it a code quality problem.

> Because there are no counters whose idx is larger than fixed
> counters. However, it will have problem if new counter type is introduced
> in generic code. For example, freerunning counters.
> 
> Actually, the 'fixed counters' in the clinet IMC uncore is not
> traditional fixed counter. They are freerunning counters, which don't
> need the idx to indicate which counter is assigned. They also have same
> bits wide. So it's OK to let them use the same idx. event_base is good

s/wide/width/

> enough to select the proper freerunning counter.

So why are they named fixed counters in the first place? If they are not
fixed, but freerunning then please clean that up as well.

I pointed out to you that this is crap. So please don't try to justify this
crap. Just fix it up.

> There is no traditional fixed counter in clinet IMC uncore. Let them use
> the same idx as fixed event for clinet IMC uncore events.

I have no idea what's traditional about counters, but that's a nit pick.

> The following patch will remove the special codes in generic
> uncore_perf_event_update.

I told you more than once, that 'The ... patch', 'This patch' is not part
of a proper changelog.

See Documentation/process/submitting-patches.rst:

    Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
    instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
    to do frotz", as if you are giving orders to the codebase to change
    its behaviour.

along with the rest of the document.

Thanks,

	tglx

* Re: [PATCH V2 2/4] perf/x86/intel/uncore: inline function to check the fixed counter event
  2017-10-19 16:55 ` [PATCH V2 2/4] perf/x86/intel/uncore: inline function to check the fixed counter event kan.liang
@ 2017-10-20 14:13   ` Thomas Gleixner
  0 siblings, 0 replies; 9+ messages in thread
From: Thomas Gleixner @ 2017-10-20 14:13 UTC (permalink / raw)
  To: Kan Liang; +Cc: Peter Zijlstra, x86, LKML, acme, eranian, ak

On Thu, 19 Oct 2017, kan.liang@intel.com wrote:
> From: Kan Liang <Kan.liang@intel.com>
> 
> Remove the special codes in generic uncore_perf_event_update.

What are special codes?

> Introduce inline function to check the fixed counter event.
>  
> +static inline bool uncore_pmc_fixed(int idx)
> +{
> +	return (idx == UNCORE_PMC_IDX_FIXED);

The parentheses are pointless. Please get rid of them.

Thanks,

	tglx

* Re: [PATCH V2 3/4] perf/x86/intel/uncore: add infrastructure for freerunning counters
  2017-10-19 16:55 ` [PATCH V2 3/4] perf/x86/intel/uncore: add infrastructure for freerunning counters kan.liang
@ 2017-10-20 14:15   ` Thomas Gleixner
  0 siblings, 0 replies; 9+ messages in thread
From: Thomas Gleixner @ 2017-10-20 14:15 UTC (permalink / raw)
  To: Kan Liang; +Cc: Peter Zijlstra, x86, LKML, acme, eranian, ak

On Thu, 19 Oct 2017, kan.liang@intel.com wrote:
> From: Kan Liang <Kan.liang@intel.com>
> 
> There are a number of freerunning counters introduced for uncore.
> For example, Skylake Server has IIO freerunning counters to collect
> Input/Output x BW/Utilization.
> 
> The freerunning counter is similar as fixed counter, except it cannot
> be written by SW. It needs to be specially handled in generic code and
> not added in box->events list.

You are describing what you are doing, not why.

> Introduce a new idx to indicate the freerunning counter. Only one idx is

Please write out index and do not use arbitrary acronyms in changelogs.

> enough for all freerunning counters. Because event and freerunning
> counter are always 1:1 mapped. The freerunning counter is always

  mapped, the free running ....  

> available. It doesn't need extra idx to indicate the assigned counter.
> 
> The event code for freerunning event is shared with fixed event, which

Can you please use consistent terminology: freerunning counter, freerunning
event? Are you talking about two different things here?

> is 0xff. The umask of freerunning event starts from 0x10. The umask less
> than 0x10 is reserved for fixed event.
> 
> The Freerunning counters could have different MSR location and offset.

s/Freerunning/free running/

could have ? Either they have or not.

> Accordingly, they are divided into different types. Each type is limited
> to only have at most 16 events.
> So the umask of the first free running events type starts from 0x10. The
> umask of the second starts from 0x20. The rest can be done in the same
> manner.

Which rest and which manner?

Please spend more time in writing change logs. They are part of the patch
and an essential information for a reviewer.

> @@ -218,7 +218,9 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
>  	u64 prev_count, new_count, delta;
>  	int shift;
>  
> -	if (uncore_pmc_fixed(event->hw.idx))
> +	if (uncore_pmc_freerunning(event->hw.idx))
> +		shift = 64 - uncore_freerunning_bits(box, event);
> +	else if (uncore_pmc_fixed(event->hw.idx))
>  		shift = 64 - uncore_fixed_ctr_bits(box);
>  	else
>  		shift = 64 - uncore_perf_ctr_bits(box);
> @@ -362,6 +364,10 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader,
>  		if (n >= max_count)
>  			return -EINVAL;
>  
> +		/* freerunning event is not tracked by box->events list */

First word in a sentence starts with an uppercase letter.

> +		if (uncore_pmc_freerunning(event->hw.idx))
> +			continue;
> +
>  		box->event_list[n] = event;
>  		n++;
>  	}
> @@ -454,10 +460,21 @@ static void uncore_pmu_event_start(struct perf_event *event, int flags)
>  	struct intel_uncore_box *box = uncore_event_to_box(event);
>  	int idx = event->hw.idx;
>  
> -	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
> +	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
>  		return;
>  
> -	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
> +	/*
> +	 * Freerunning counters cannot be written by SW.
> +	 * Does not need to enable it or add the event to box->events list.
> +	 * Use current value as start point.

So what you want to say is:

   	/*
	 * Free running counters are read only and always active. Use the
	 * current counter value as start point.
	 */

Right?

> +	 */
> +	if (uncore_pmc_freerunning(event->hw.idx)) {
> +		local64_set(&event->hw.prev_count,
> +			    uncore_read_counter(box, event));
> +		return;
> +	}
> +
> +	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
>  		return;
>  
>  	event->hw.state = 0;
> @@ -479,6 +496,15 @@ static void uncore_pmu_event_stop(struct perf_event *event, int flags)
>  	struct intel_uncore_box *box = uncore_event_to_box(event);
>  	struct hw_perf_event *hwc = &event->hw;
>  
> +	/*
> +	 * Does not need to disable freerunning counters.

Does not need? You _cannot_ disable them.

> +	 * Read current value as end.
> +	 */
> +	if (uncore_pmc_freerunning(hwc->idx)) {
> +		uncore_perf_event_update(box, event);
> +		return;


> +	/*
> +	 * Freerunning counters cannot be written by SW.

See above.

> +	 * No need to force event->hw.idx = -1 to reassign the counter.
> +	 */
> +	if (uncore_pmc_freerunning(event->hw.idx))
> +		return;
> +
>  	for (i = 0; i < box->n_events; i++) {
>  		if (event == box->event_list[i]) {
>  			uncore_put_event_constraint(box, event);
> @@ -603,6 +647,13 @@ static int uncore_validate_group(struct intel_uncore_pmu *pmu,
>  	struct intel_uncore_box *fake_box;
>  	int ret = -EINVAL, n;
>  
> +	/*
> +	 * Event and freerunning counters are 1:1 mapped,
> +	 * which is always available.

Nice informative one!

> +	 */
> +	if (uncore_pmc_freerunning(event->hw.idx))
> +		return 0;
> +
> +/*
> + * Freerunning counter is similar as fixed counter, except it cannot be
> + * written by SW.
> + *
> + * Freerunning MSR events have the same event code 0xff as fixed events.
> + * The Freerunning events umask starts from 0x10.
> + * The umask which is less than 0x10 is reserved for fixed events.
> + *
> + * The Freerunning events are divided into different types according to
> + * MSR location, bit width or definition. Each type is limited to only have
> + * at most 16 events.
> + * So the umask of first type starts from 0x10, the second starts from 0x20,
> + * the rest can be done in the same manner.
> + */
> +#define UNCORE_FREERUNNING_MSR_START		0x10

New lines are there for a reason. Squeezing that #define between the
comment and the inline function, which does not even use that define, does
not make the code more readable.

> +static inline unsigned int uncore_freerunning_msr_idx(u64 config)
> +{
> +	return ((config >> 8) & 0xf);
> +}
> +
> +static inline unsigned int uncore_freerunning_msr_type(u64 config)
> +{
> +	return ((((config >> 8) - UNCORE_FREERUNNING_MSR_START) >> 4) & 0xf);
> +}

Other than that this looks sane.

Thanks,

	tglx


* Re: [PATCH V2 4/4] perf/x86/intel/uncore: SKX support for IIO freerunning counter
  2017-10-19 16:55 ` [PATCH V2 4/4] perf/x86/intel/uncore: SKX support for IIO freerunning counter kan.liang
@ 2017-10-20 14:15   ` Thomas Gleixner
  0 siblings, 0 replies; 9+ messages in thread
From: Thomas Gleixner @ 2017-10-20 14:15 UTC (permalink / raw)
  To: Kan Liang; +Cc: Peter Zijlstra, x86, LKML, acme, eranian, ak

On Thu, 19 Oct 2017, kan.liang@intel.com wrote:

> As of Skylake Server, there are a number of free-running counters in
> each IIO Box that collect counts for per box IO clocks and per Port
> Input/Output x BW/Utilization.
> 
> Freerunning counters cannot be written by SW. Counting will be suspended
> only when the IIO Box is powered down.

Free running counters are read only and always active when the IIO box is
powered up.

> 
> The bit width of freerunning counter is 36-bit.

Please use 'free running' consistently all over the place.

Thanks,

	tglx


* RE: [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events
  2017-10-20 14:12 ` [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events Thomas Gleixner
@ 2017-10-20 15:04   ` Liang, Kan
  0 siblings, 0 replies; 9+ messages in thread
From: Liang, Kan @ 2017-10-20 15:04 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Peter Zijlstra, x86, LKML, acme, eranian, ak

> > To specially handle it, event->hw.idx >= UNCORE_PMC_IDX_FIXED is used to
> > check fixed counters in the generic uncore_perf_event_update.
> > It does not have problem in current code.
> 
> I disagree. While it has no functional problem, it's a obscure hack which
> means it is a code quality problem.
> 
> > Because there are no counters whose idx is larger than fixed
> > counters. However, it will have problem if new counter type is introduced
> > in generic code. For example, freerunning counters.
> >
> > Actually, the 'fixed counters' in the clinet IMC uncore is not
> > traditional fixed counter. They are freerunning counters, which don't
> > need the idx to indicate which counter is assigned. They also have same
> > bits wide. So it's OK to let them use the same idx. event_base is good
> 
> s/wide/width/
> 
> > enough to select the proper freerunning counter.
> 
> So why are they named fixed counters in the first place? If they are not
> fixed, but freerunning then please clean that up as well.
>

Sure, I will clean it up and make it part of the new free running infrastructure.
I will also modify all the changelogs according to your comments.

Thank you for the detailed review and your patience.
Kan

 
> I pointed out to you that this is crap. So please don't try to justify this
> crap. Just fix it up.
> 
> > There is no traditional fixed counter in clinet IMC uncore. Let them use
> > the same idx as fixed event for clinet IMC uncore events.
> 
> I have no idea what's traditional about counters, but that's a nit pick.
> 
> > The following patch will remove the special codes in generic
> > uncore_perf_event_update.
> 
> I told you more than once, that 'The ... patch', 'This patch' is not part
> of a proper changelog.
> 
> See Documentation/process/submitting-patches.rst:
> 
>     Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
>     instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
>     to do frotz", as if you are giving orders to the codebase to change
>     its behaviour.
> 
> along with the rest of the document.
> 
> Thanks,
> 
> 	tglx


Thread overview: 9+ messages
2017-10-19 16:55 [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events kan.liang
2017-10-19 16:55 ` [PATCH V2 2/4] perf/x86/intel/uncore: inline function to check the fixed counter event kan.liang
2017-10-20 14:13   ` Thomas Gleixner
2017-10-19 16:55 ` [PATCH V2 3/4] perf/x86/intel/uncore: add infrastructure for freerunning counters kan.liang
2017-10-20 14:15   ` Thomas Gleixner
2017-10-19 16:55 ` [PATCH V2 4/4] perf/x86/intel/uncore: SKX support for IIO freerunning counter kan.liang
2017-10-20 14:15   ` Thomas Gleixner
2017-10-20 14:12 ` [PATCH V2 1/4] perf/x86/intel/uncore: use same idx for clinet IMC uncore events Thomas Gleixner
2017-10-20 15:04   ` Liang, Kan
