From: kan.liang@linux.intel.com
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: acme@kernel.org, tglx@linutronix.de, bp@alien8.de,
	namhyung@kernel.org, jolsa@redhat.com, ak@linux.intel.com,
	yao.jin@linux.intel.com, alexander.shishkin@linux.intel.com,
	adrian.hunter@intel.com, ricardo.neri-calderon@linux.intel.com,
	Kan Liang <kan.liang@linux.intel.com>
Subject: [PATCH V5 12/25] perf/x86/intel: Factor out intel_pmu_check_event_constraints
Date: Mon,  5 Apr 2021 08:10:54 -0700
Message-ID: <1617635467-181510-13-git-send-email-kan.liang@linux.intel.com>
In-Reply-To: <1617635467-181510-1-git-send-email-kan.liang@linux.intel.com>

From: Kan Liang <kan.liang@linux.intel.com>

Each hybrid PMU has to check and update its own event constraints before
registration.

Factor the event constraint checks out of intel_pmu_init() into a new
helper, intel_pmu_check_event_constraints(), which will be reused later
to check the event constraints of each hybrid PMU.
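
For illustration, the intended reuse could look like the sketch below,
where each hybrid PMU supplies its own constraints, counters, and
control mask. The x86_hybrid_pmu structure and its fields here are
assumptions made for the example, not the final interface:

	/*
	 * Hypothetical sketch only: run the factored-out check once per
	 * hybrid PMU instead of once on the global x86_pmu state. The
	 * structure and field names are illustrative assumptions.
	 */
	static void intel_pmu_check_hybrid_pmus(void)
	{
		struct x86_hybrid_pmu *pmu;
		int i;

		for (i = 0; i < x86_pmu.num_hybrid_pmus; i++) {
			pmu = &x86_pmu.hybrid_pmu[i];
			intel_pmu_check_event_constraints(pmu->event_constraints,
							  pmu->num_counters,
							  pmu->num_counters_fixed,
							  pmu->intel_ctrl);
		}
	}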

Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 arch/x86/events/intel/core.c | 82 +++++++++++++++++++++++++-------------------
 1 file changed, 47 insertions(+), 35 deletions(-)
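
A quick worked example of the mask arithmetic in the new helper (a
standalone userspace sketch, assuming INTEL_PMC_IDX_FIXED == 32 as in
arch/x86/include/asm/perf_event.h; the counter counts are made up for
illustration):

	#include <stdio.h>
	#include <stdint.h>

	#define INTEL_PMC_IDX_FIXED	32

	int main(void)
	{
		int num_counters = 8, num_counters_fixed = 4;

		/* A constraint spanning fixed counters 0-7: bits 32-39. */
		uint64_t mask = 0xffULL << INTEL_PMC_IDX_FIXED;

		/*
		 * Extend the mask to the generic counters, as the helper
		 * does for fixed events other than REF_CYCLES: adds bits 0-7.
		 */
		mask |= (1ULL << num_counters) - 1;

		/*
		 * Clamp away fixed-counter bits beyond the four enumerated
		 * in CPUID: bits 36-39 are cleared.
		 */
		mask &= ~(~0ULL << (INTEL_PMC_IDX_FIXED + num_counters_fixed));

		printf("mask = %#llx\n", (unsigned long long)mask); /* 0xf000000ff */
		return 0;
	}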

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 9394646..53a2e2e 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5090,6 +5090,49 @@ static void intel_pmu_check_num_counters(int *num_counters,
 	*intel_ctrl |= fixed_mask << INTEL_PMC_IDX_FIXED;
 }
 
+static void intel_pmu_check_event_constraints(struct event_constraint *event_constraints,
+					      int num_counters,
+					      int num_counters_fixed,
+					      u64 intel_ctrl)
+{
+	struct event_constraint *c;
+
+	if (!event_constraints)
+		return;
+
+	/*
+	 * The event on fixed counter 2 (REF_CYCLES) only works on this
+	 * counter, so do not extend the mask to the generic counters.
+	 */
+	for_each_event_constraint(c, event_constraints) {
+		/*
+		 * Don't extend the topdown slots and metrics
+		 * events to the generic counters.
+		 */
+		if (c->idxmsk64 & INTEL_PMC_MSK_TOPDOWN) {
+			/*
+			 * Disable the topdown slots and metrics events
+			 * if the slots event is not in CPUID.
+			 */
+			if (!(INTEL_PMC_MSK_FIXED_SLOTS & intel_ctrl))
+				c->idxmsk64 = 0;
+			c->weight = hweight64(c->idxmsk64);
+			continue;
+		}
+
+		if (c->cmask == FIXED_EVENT_FLAGS) {
+			/* Disable fixed counters which are not in CPUID */
+			c->idxmsk64 &= intel_ctrl;
+
+			if (c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES)
+				c->idxmsk64 |= (1ULL << num_counters) - 1;
+		}
+		c->idxmsk64 &=
+			~(~0ULL << (INTEL_PMC_IDX_FIXED + num_counters_fixed));
+		c->weight = hweight64(c->idxmsk64);
+	}
+}
+
 __init int intel_pmu_init(void)
 {
 	struct attribute **extra_skl_attr = &empty_attrs;
@@ -5100,7 +5143,6 @@ __init int intel_pmu_init(void)
 	union cpuid10_edx edx;
 	union cpuid10_eax eax;
 	union cpuid10_ebx ebx;
-	struct event_constraint *c;
 	unsigned int fixed_mask;
 	struct extra_reg *er;
 	bool pmem = false;
@@ -5738,40 +5780,10 @@ __init int intel_pmu_init(void)
 	if (x86_pmu.intel_cap.anythread_deprecated)
 		x86_pmu.format_attrs = intel_arch_formats_attr;
 
-	if (x86_pmu.event_constraints) {
-		/*
-		 * event on fixed counter2 (REF_CYCLES) only works on this
-		 * counter, so do not extend mask to generic counters
-		 */
-		for_each_event_constraint(c, x86_pmu.event_constraints) {
-			/*
-			 * Don't extend the topdown slots and metrics
-			 * events to the generic counters.
-			 */
-			if (c->idxmsk64 & INTEL_PMC_MSK_TOPDOWN) {
-				/*
-				 * Disable topdown slots and metrics events,
-				 * if slots event is not in CPUID.
-				 */
-				if (!(INTEL_PMC_MSK_FIXED_SLOTS & x86_pmu.intel_ctrl))
-					c->idxmsk64 = 0;
-				c->weight = hweight64(c->idxmsk64);
-				continue;
-			}
-
-			if (c->cmask == FIXED_EVENT_FLAGS) {
-				/* Disabled fixed counters which are not in CPUID */
-				c->idxmsk64 &= x86_pmu.intel_ctrl;
-
-				if (c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES)
-					c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
-			}
-			c->idxmsk64 &=
-				~(~0ULL << (INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed));
-			c->weight = hweight64(c->idxmsk64);
-		}
-	}
-
+	intel_pmu_check_event_constraints(x86_pmu.event_constraints,
+					  x86_pmu.num_counters,
+					  x86_pmu.num_counters_fixed,
+					  x86_pmu.intel_ctrl);
 	/*
 	 * Access LBR MSR may cause #GP under certain circumstances.
 	 * E.g. KVM doesn't support LBR MSR
-- 
2.7.4


