[V4,16/23] perf/x86/intel: Set correct weight for topdown subevent counters

Message ID 20190326160901.4887-17-kan.liang@linux.intel.com
State New
Series
  • perf: Add Icelake support

Commit Message

Liang, Kan March 26, 2019, 4:08 p.m. UTC
From: Andi Kleen <ak@linux.intel.com>

The topdown sub event counters are mapped to a fixed counter, but
should have the normal weight for the scheduler, so special-case
them.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---

No changes since V3.

 arch/x86/events/intel/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

Patch

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 6a8a221dc188..31e4e283e7c5 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5081,6 +5081,15 @@  __init int intel_pmu_init(void)
 		 * counter, so do not extend mask to generic counters
 		 */
 		for_each_event_constraint(c, x86_pmu.event_constraints) {
+			/*
+			 * Don't limit the event mask for topdown sub event
+			 * counters.
+			 */
+			if (x86_pmu.num_counters_fixed >= 3 &&
+			    c->idxmsk64 & INTEL_PMC_MSK_ANY_SLOTS) {
+				c->weight = hweight64(c->idxmsk64);
+				continue;
+			}
 			if (c->cmask == FIXED_EVENT_FLAGS
 			    && c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES) {
 				c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
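For readers outside the perf/x86 code: the constraint weight is what the
counter scheduler uses to decide the order in which events get counters,
placing the least flexible (lowest-weight) events first, so a topdown sub
event that is mapped to a fixed counter still needs a weight that matches
its real index mask. The standalone sketch below illustrates why that
ordering matters. It is not kernel code: the struct, the event names, and
the four-counter setup are invented for illustration.

/*
 * Standalone sketch, not kernel code: the x86 perf counter scheduler
 * places events in ascending order of constraint weight, so the least
 * flexible events claim their counters first.  The setup below (four
 * generic counters, a flexible "cycles" event and a "topdown" event
 * pinned to counter 0) is made up for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_COUNTERS 4

struct constraint {
	const char *name;
	uint64_t    idxmsk;	/* bitmask of counters the event may use */
	int	    weight;	/* popcount of idxmsk */
};

/* greedily take the first free counter the constraint allows */
static int assign_counter(const struct constraint *c, uint64_t *used)
{
	for (int i = 0; i < NUM_COUNTERS; i++) {
		uint64_t bit = 1ULL << i;

		if ((c->idxmsk & bit) && !(*used & bit)) {
			*used |= bit;
			return i;
		}
	}
	return -1;	/* no counter left for this event */
}

static void run(const char *label, const struct constraint *order[], int n)
{
	uint64_t used = 0;

	printf("%s:\n", label);
	for (int i = 0; i < n; i++)
		printf("  %-8s -> counter %d\n",
		       order[i]->name, assign_counter(order[i], &used));
}

int main(void)
{
	struct constraint flexible = { "cycles",  0xf,        4 };
	struct constraint pinned   = { "topdown", 1ULL << 0,  1 };

	/* placing the flexible event first steals counter 0: topdown fails */
	const struct constraint *naive[] = { &flexible, &pinned };
	/* ascending weight order places the pinned event first: both fit */
	const struct constraint *by_weight[] = { &pinned, &flexible };

	run("naive order", naive, 2);
	run("weight order", by_weight, 2);
	return 0;
}

In the kernel the same idea is expressed through c->weight, which the hunk
above recomputes with hweight64() from the constraint's index mask so that
the fixed-counter mapping of the topdown sub events does not distort the
scheduler's ordering.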