* [PATCH V2 0/7] powerpc/perf: Fixes for power10 PMU
From: Athira Rajeev @ 2020-11-26 16:54 UTC
  To: mpe; +Cc: maddy, linuxppc-dev

This patchset contains seven PMU fixes for power10.

Patch 1 fixes the event code to include the radix_scope_qual
bit in power10.
Patches 2 and 3 update the event group constraints for L2/L3
and threshold events in power10.
Patches 4, 5 and 6 include event code changes for the l2/l3
events and some of the generic events.
Patch 7 adds a fix for the PMCCEXT bit in power10.

Changelog:
Changes from v1 -> v2
- Addressed Michael Ellerman's comments on the patchset:
  Split patch 2 so that the l2l3 and threshold event
  group constraint fixes are addressed separately.
  Split patch 3 likewise to separate the event code updates
  for generic and cache events.
  Fixed the commit messages, and also the PMCCEXT bit setting
  during event enable.

Athira Rajeev (7):
  powerpc/perf: Fix to update radix_scope_qual in power10
  powerpc/perf: Update the PMU group constraints for l2l3 events in
    power10
  powerpc/perf: Fix the PMU group constraints for threshold events in
    power10
  powerpc/perf: Add generic and cache event list for power10 DD1
  powerpc/perf: Fix to update generic event codes for power10
  powerpc/perf: Fix to update cache events with l2l3 events in power10
  powerpc/perf: MMCR0 control for PMU registers under PMCC=00

 arch/powerpc/include/asm/reg.h          |   1 +
 arch/powerpc/kernel/cpu_setup_power.c   |   1 +
 arch/powerpc/kernel/dt_cpu_ftrs.c       |   1 +
 arch/powerpc/perf/core-book3s.c         |   4 +
 arch/powerpc/perf/isa207-common.c       |  35 ++++++-
 arch/powerpc/perf/isa207-common.h       |  16 ++-
 arch/powerpc/perf/power10-events-list.h |   9 ++
 arch/powerpc/perf/power10-pmu.c         | 178 ++++++++++++++++++++++++++++++--
 8 files changed, 231 insertions(+), 14 deletions(-)

-- 
1.8.3.1


* [PATCH V2 1/7] powerpc/perf: Fix to update radix_scope_qual in power10
From: Athira Rajeev @ 2020-11-26 16:54 UTC
  To: mpe; +Cc: maddy, linuxppc-dev

power10 uses bit 9 of the raw event code as RADIX_SCOPE_QUAL.
This bit is used for enabling radix process events. Fix the
PMU counter support functions to program bit 18 of MMCR1
(Monitor Mode Control Register 1) with the RADIX_SCOPE_QUAL
bit value. Since this field is not per-pmc, add it to the PMU
group constraints to make sure events in a group have the same
value for this field. Use bit 21 as the constraint bit field
for radix_scope_qual. Also update the power10 raw event
encoding layout information, the format field and the
constraint bit layout to include the radix_scope_qual bit.
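
As an illustration (a minimal sketch, not part of the patch): Power
SPRs use IBM bit numbering, where bit 0 is the most significant bit
of the 64-bit register, so MMCR1 bit 18 corresponds to a left shift
of 63 - 18 = 45, the p10_MMCR1_RADIX_SCOPE_QUAL_SHIFT value used in
the patch below.

#include <stdint.h>

/* Bit 9 of the raw event code holds radix_scope_qual. */
#define P10_EVENT_RADIX_SCOPE_QUAL_SHIFT	9
/* MMCR1[18] in IBM numbering == conventional shift 63 - 18 = 45. */
#define P10_MMCR1_RADIX_SCOPE_QUAL_SHIFT	(63 - 18)

static uint64_t set_radix_scope_qual(uint64_t event, uint64_t mmcr1)
{
	uint64_t val = (event >> P10_EVENT_RADIX_SCOPE_QUAL_SHIFT) & 0x1;

	return mmcr1 | (val << P10_MMCR1_RADIX_SCOPE_QUAL_SHIFT);
}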

Fixes: a64e697cef23 ("powerpc/perf: power10 Performance Monitoring support")
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
---
 arch/powerpc/perf/isa207-common.c | 12 ++++++++++++
 arch/powerpc/perf/isa207-common.h | 13 ++++++++++---
 arch/powerpc/perf/power10-pmu.c   | 11 +++++++----
 3 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
index 2848904..f57f54f 100644
--- a/arch/powerpc/perf/isa207-common.c
+++ b/arch/powerpc/perf/isa207-common.c
@@ -339,6 +339,11 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
 		value |= CNST_L1_QUAL_VAL(cache);
 	}
 
+	if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+		mask |= CNST_RADIX_SCOPE_GROUP_MASK;
+		value |= CNST_RADIX_SCOPE_GROUP_VAL(event >> p10_EVENT_RADIX_SCOPE_QUAL_SHIFT);
+	}
+
 	if (is_event_marked(event)) {
 		mask  |= CNST_SAMPLE_MASK;
 		value |= CNST_SAMPLE_VAL(event >> EVENT_SAMPLE_SHIFT);
@@ -456,6 +461,13 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
 			}
 		}
 
+		/* Set RADIX_SCOPE_QUAL bit */
+		if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+			val = (event[i] >> p10_EVENT_RADIX_SCOPE_QUAL_SHIFT) &
+				p10_EVENT_RADIX_SCOPE_QUAL_MASK;
+			mmcr1 |= val << p10_MMCR1_RADIX_SCOPE_QUAL_SHIFT;
+		}
+
 		if (is_event_marked(event[i])) {
 			mmcra |= MMCRA_SAMPLE_ENABLE;
 
diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h
index 7025de5..dc9c3d2 100644
--- a/arch/powerpc/perf/isa207-common.h
+++ b/arch/powerpc/perf/isa207-common.h
@@ -101,6 +101,9 @@
 #define p10_EVENT_CACHE_SEL_MASK	0x3ull
 #define p10_EVENT_MMCR3_MASK		0x7fffull
 #define p10_EVENT_MMCR3_SHIFT		45
+#define p10_EVENT_RADIX_SCOPE_QUAL_SHIFT	9
+#define p10_EVENT_RADIX_SCOPE_QUAL_MASK	0x1
+#define p10_MMCR1_RADIX_SCOPE_QUAL_SHIFT	45
 
 #define p10_EVENT_VALID_MASK		\
 	((p10_SDAR_MODE_MASK   << p10_SDAR_MODE_SHIFT		|	\
@@ -112,6 +115,7 @@
 	(p9_EVENT_COMBINE_MASK << p9_EVENT_COMBINE_SHIFT)	|	\
 	(p10_EVENT_MMCR3_MASK  << p10_EVENT_MMCR3_SHIFT)	|	\
 	(EVENT_MARKED_MASK     << EVENT_MARKED_SHIFT)		|	\
+	(p10_EVENT_RADIX_SCOPE_QUAL_MASK << p10_EVENT_RADIX_SCOPE_QUAL_SHIFT)	|	\
 	 EVENT_LINUX_MASK					|	\
 	EVENT_PSEL_MASK))
 /*
@@ -125,9 +129,9 @@
  *
  *        28        24        20        16        12         8         4         0
  * | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - |
- *               [ ] |   [ ]   [  sample ]   [     ]   [6] [5]   [4] [3]   [2] [1]
- *                |  |    |                     |
- *      BHRB IFM -*  |    |                     |      Count of events for each PMC.
+ *               [ ] |   [ ] |  [  sample ]   [     ]   [6] [5]   [4] [3]   [2] [1]
+ *                |  |    |  |                  |
+ *      BHRB IFM -*  |    |  |*radix_scope      |      Count of events for each PMC.
  *              EBB -*    |                     |        p1, p2, p3, p4, p5, p6.
  *      L1 I/D qualifier -*                     |
  *                     nc - number of counters -*
@@ -165,6 +169,9 @@
 #define CNST_L2L3_GROUP_VAL(v)	(((v) & 0x1full) << 55)
 #define CNST_L2L3_GROUP_MASK	CNST_L2L3_GROUP_VAL(0x1f)
 
+#define CNST_RADIX_SCOPE_GROUP_VAL(v)	(((v) & 0x1ull) << 21)
+#define CNST_RADIX_SCOPE_GROUP_MASK	CNST_RADIX_SCOPE_GROUP_VAL(1)
+
 /*
  * For NC we are counting up to 4 events. This requires three bits, and we need
  * the fifth event to overflow and set the 4th bit. To achieve that we bias the
diff --git a/arch/powerpc/perf/power10-pmu.c b/arch/powerpc/perf/power10-pmu.c
index a01e87f..88c5430 100644
--- a/arch/powerpc/perf/power10-pmu.c
+++ b/arch/powerpc/perf/power10-pmu.c
@@ -23,10 +23,10 @@
  *
  *        28        24        20        16        12         8         4         0
  * | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - |
- *   [   ] [  sample ]   [ ] [ ]   [ pmc ]   [unit ]   [ ]   m   [    pmcxsel    ]
- *     |        |        |    |                        |     |
- *     |        |        |    |                        |     *- mark
- *     |        |        |    *- L1/L2/L3 cache_sel    |
+ *   [   ] [  sample ]   [ ] [ ]   [ pmc ]   [unit ]   [ ] |  m   [    pmcxsel    ]
+ *     |        |        |    |                        |   |  |
+ *     |        |        |    |                        |   |  *- mark
+ *     |        |        |    *- L1/L2/L3 cache_sel    |   |*-radix_scope_qual
  *     |        |        sdar_mode                     |
  *     |        *- sampling mode for marked events     *- combine
  *     |
@@ -59,6 +59,7 @@
  *
  * MMCR1[16] = cache_sel[0]
  * MMCR1[17] = cache_sel[1]
+ * MMCR1[18] = radix_scope_qual
  *
  * if mark:
  *	MMCRA[63]    = 1		(SAMPLE_ENABLE)
@@ -175,6 +176,7 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 PMU_FORMAT_ATTR(invert_bit,     "config:47");
 PMU_FORMAT_ATTR(src_mask,       "config:48-53");
 PMU_FORMAT_ATTR(src_match,      "config:54-59");
+PMU_FORMAT_ATTR(radix_scope,	"config:9");
 
 static struct attribute *power10_pmu_format_attr[] = {
 	&format_attr_event.attr,
@@ -194,6 +196,7 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 	&format_attr_invert_bit.attr,
 	&format_attr_src_mask.attr,
 	&format_attr_src_match.attr,
+	&format_attr_radix_scope.attr,
 	NULL,
 };
 
-- 
1.8.3.1


* [PATCH V2 2/7] powerpc/perf: Update the PMU group constraints for l2l3 events in power10
From: Athira Rajeev @ 2020-11-26 16:54 UTC
  To: mpe; +Cc: maddy, linuxppc-dev

In Power9, L2/L3 bus events are always available as a
"bank" of 4 events. To obtain the counts for any of the
l2/l3 bus events in a given bank, the user has to program
PMC4 with the corresponding l2/l3 bus event for that bank.

Commit 59029136d750 ("powerpc/perf: Add constraints for power9 l2/l3 bus events")
enforced this rule for Power9. But the rule is not valid for
Power10, since in Power10 Monitor Mode Control Register 2
(MMCR2) has bits to configure the l2/l3 event bits. Hence
remove this PMC4 constraint check from power10.

Since the l2/l3 bits in MMCR2 are not per-pmc, handle the
group constraint checks for the l2/l3 bits in MMCR2.
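
As an illustration (a simplified sketch of the constraint mechanism,
not the kernel's exact algorithm): for "same value" fields such as
CNST_L2L3_GROUP, each event contributes a (mask, value) pair, and
two events may share a group only if their values agree wherever
their masks overlap.

#include <stdbool.h>
#include <stdint.h>

static bool constraints_compatible(uint64_t mask1, uint64_t val1,
				   uint64_t mask2, uint64_t val2)
{
	uint64_t overlap = mask1 & mask2;

	/* Same-value fields must match across all events in a group. */
	return (val1 & overlap) == (val2 & overlap);
}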

Fixes: a64e697cef23 ("powerpc/perf: power10 Performance Monitoring support")
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
---
 arch/powerpc/perf/isa207-common.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
index f57f54f..38ed450c 100644
--- a/arch/powerpc/perf/isa207-common.c
+++ b/arch/powerpc/perf/isa207-common.c
@@ -311,9 +311,11 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
 	}
 
 	if (unit >= 6 && unit <= 9) {
-		if (cpu_has_feature(CPU_FTR_ARCH_31) && (unit == 6)) {
-			mask |= CNST_L2L3_GROUP_MASK;
-			value |= CNST_L2L3_GROUP_VAL(event >> p10_L2L3_EVENT_SHIFT);
+		if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+			if (unit == 6) {
+				mask |= CNST_L2L3_GROUP_MASK;
+				value |= CNST_L2L3_GROUP_VAL(event >> p10_L2L3_EVENT_SHIFT);
+			}
 		} else if (cpu_has_feature(CPU_FTR_ARCH_300)) {
 			mask  |= CNST_CACHE_GROUP_MASK;
 			value |= CNST_CACHE_GROUP_VAL(event & 0xff);
-- 
1.8.3.1


* [PATCH V2 3/7] powerpc/perf: Fix the PMU group constraints for threshold events in power10
From: Athira Rajeev @ 2020-11-26 16:54 UTC
  To: mpe; +Cc: maddy, linuxppc-dev

The PMU group constraints mask for threshold events covers
all thresholding bits, which include the threshold control
value (start/stop), the select value, as well as the
thresh_cmp value (MMCRA[9:18]). In power9, the thresh_cmp
bits were part of the event code. But in power10, the
thresh_cmp bits are not part of the event code, due to the
inclusion of the MMCR3 bits. Hence thresh_cmp is not valid
in the group constraints for power10.

Fix the PMU group constraint checking for threshold events
in power10 by using a constraint mask and value that cover
only the threshold control and select bits.
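
As an illustration (a sketch of the field arithmetic; the widths
follow from the patch: thresh_cmp is MMCRA[9:18], i.e. 10 bits, and
the new 0x7ff mask covers the 8-bit control plus 3-bit select values):

/* All 21 threshold bits: sel (3) + ctl (8) + cmp (10). */
#define EVENT_THRESH_MASK		0x1fffffull
/* ctl + sel only: 8 + 3 = 11 bits. */
#define THRESH_CTL_SEL_MASK		0x7ffull

/* The p9-style constraint value covers the full threshold field ... */
#define CNST_THRESH_VAL(v)		(((v) & EVENT_THRESH_MASK) << 32)
/* ... while the p10-style value masks out thresh_cmp. */
#define CNST_THRESH_CTL_SEL_VAL(v)	(((v) & THRESH_CTL_SEL_MASK) << 32)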

Fixes: a64e697cef23 ("powerpc/perf: power10 Performance Monitoring support")
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
---
 arch/powerpc/perf/isa207-common.c | 7 ++++++-
 arch/powerpc/perf/isa207-common.h | 3 +++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
index 38ed450c..0f4983e 100644
--- a/arch/powerpc/perf/isa207-common.c
+++ b/arch/powerpc/perf/isa207-common.c
@@ -351,7 +351,12 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
 		value |= CNST_SAMPLE_VAL(event >> EVENT_SAMPLE_SHIFT);
 	}
 
-	if (cpu_has_feature(CPU_FTR_ARCH_300))  {
+	if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+		if (event_is_threshold(event)) {
+			mask  |= CNST_THRESH_CTL_SEL_MASK;
+			value |= CNST_THRESH_CTL_SEL_VAL(event >> EVENT_THRESH_SHIFT);
+		}
+	} else if (cpu_has_feature(CPU_FTR_ARCH_300))  {
 		if (event_is_threshold(event) && is_thresh_cmp_valid(event)) {
 			mask  |= CNST_THRESH_MASK;
 			value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h
index dc9c3d2..4208764 100644
--- a/arch/powerpc/perf/isa207-common.h
+++ b/arch/powerpc/perf/isa207-common.h
@@ -149,6 +149,9 @@
 #define CNST_THRESH_VAL(v)	(((v) & EVENT_THRESH_MASK) << 32)
 #define CNST_THRESH_MASK	CNST_THRESH_VAL(EVENT_THRESH_MASK)
 
+#define CNST_THRESH_CTL_SEL_VAL(v)	(((v) & 0x7ffull) << 32)
+#define CNST_THRESH_CTL_SEL_MASK	CNST_THRESH_CTL_SEL_VAL(0x7ff)
+
 #define CNST_EBB_VAL(v)		(((v) & EVENT_EBB_MASK) << 24)
 #define CNST_EBB_MASK		CNST_EBB_VAL(EVENT_EBB_MASK)
 
-- 
1.8.3.1


* [PATCH V2 4/7] powerpc/perf: Add generic and cache event list for power10 DD1
From: Athira Rajeev @ 2020-11-26 16:54 UTC
  To: mpe; +Cc: maddy, linuxppc-dev

There are event code updates for some of the generic events
and cache events in power10. In order to keep the current
event codes working on DD1 as well, create new arrays of
generic_events, cache_events and pmu_attr_groups with the
suffix _dd1, for example, power10_events_attr_dd1, so that
further event code updates can be made in the original lists,
i.e. power10_events_attr. Update the power10 PMU init code to
pick the dd1 lists while registering the power PMU, based on
the PVR (Processor Version Register) value.
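
As an illustration (PVR field macros as defined in
arch/powerpc/include/asm/reg.h; the helper name is hypothetical):
the 4-bit configuration field in PVR bits 8-11 identifies the DD
level, and DD1 parts report 1 there.

#define PVR_CFG(pvr)	(((pvr) >> 8) & 0xF)	/* Configuration field */
#define PVR_MAJ(pvr)	(((pvr) >> 4) & 0xF)	/* Major revision field */
#define PVR_MIN(pvr)	(((pvr) >> 0) & 0xF)	/* Minor revision field */

/* power10 DD1 silicon reports 1 in the configuration field. */
static inline int is_power10_dd1(unsigned int pvr)
{
	return PVR_CFG(pvr) == 1;
}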

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
---
 arch/powerpc/perf/power10-pmu.c | 152 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 152 insertions(+)

diff --git a/arch/powerpc/perf/power10-pmu.c b/arch/powerpc/perf/power10-pmu.c
index 88c5430..bc3d4dd 100644
--- a/arch/powerpc/perf/power10-pmu.c
+++ b/arch/powerpc/perf/power10-pmu.c
@@ -129,6 +129,31 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 CACHE_EVENT_ATTR(dTLB-load-misses,		PM_DTLB_MISS);
 CACHE_EVENT_ATTR(iTLB-load-misses,		PM_ITLB_MISS);
 
+static struct attribute *power10_events_attr_dd1[] = {
+	GENERIC_EVENT_PTR(PM_RUN_CYC),
+	GENERIC_EVENT_PTR(PM_RUN_INST_CMPL),
+	GENERIC_EVENT_PTR(PM_BR_CMPL),
+	GENERIC_EVENT_PTR(PM_BR_MPRED_CMPL),
+	GENERIC_EVENT_PTR(PM_LD_REF_L1),
+	GENERIC_EVENT_PTR(PM_LD_MISS_L1),
+	GENERIC_EVENT_PTR(MEM_LOADS),
+	GENERIC_EVENT_PTR(MEM_STORES),
+	CACHE_EVENT_PTR(PM_LD_MISS_L1),
+	CACHE_EVENT_PTR(PM_LD_REF_L1),
+	CACHE_EVENT_PTR(PM_LD_PREFETCH_CACHE_LINE_MISS),
+	CACHE_EVENT_PTR(PM_ST_MISS_L1),
+	CACHE_EVENT_PTR(PM_L1_ICACHE_MISS),
+	CACHE_EVENT_PTR(PM_INST_FROM_L1),
+	CACHE_EVENT_PTR(PM_IC_PREF_REQ),
+	CACHE_EVENT_PTR(PM_DATA_FROM_L3MISS),
+	CACHE_EVENT_PTR(PM_DATA_FROM_L3),
+	CACHE_EVENT_PTR(PM_BR_MPRED_CMPL),
+	CACHE_EVENT_PTR(PM_BR_CMPL),
+	CACHE_EVENT_PTR(PM_DTLB_MISS),
+	CACHE_EVENT_PTR(PM_ITLB_MISS),
+	NULL
+};
+
 static struct attribute *power10_events_attr[] = {
 	GENERIC_EVENT_PTR(PM_RUN_CYC),
 	GENERIC_EVENT_PTR(PM_RUN_INST_CMPL),
@@ -154,6 +179,11 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 	NULL
 };
 
+static struct attribute_group power10_pmu_events_group_dd1 = {
+	.name = "events",
+	.attrs = power10_events_attr_dd1,
+};
+
 static struct attribute_group power10_pmu_events_group = {
 	.name = "events",
 	.attrs = power10_events_attr,
@@ -205,12 +235,27 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 	.attrs = power10_pmu_format_attr,
 };
 
+static const struct attribute_group *power10_pmu_attr_groups_dd1[] = {
+	&power10_pmu_format_group,
+	&power10_pmu_events_group_dd1,
+	NULL,
+};
+
 static const struct attribute_group *power10_pmu_attr_groups[] = {
 	&power10_pmu_format_group,
 	&power10_pmu_events_group,
 	NULL,
 };
 
+static int power10_generic_events_dd1[] = {
+	[PERF_COUNT_HW_CPU_CYCLES] =			PM_RUN_CYC,
+	[PERF_COUNT_HW_INSTRUCTIONS] =			PM_RUN_INST_CMPL,
+	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] =		PM_BR_CMPL,
+	[PERF_COUNT_HW_BRANCH_MISSES] =			PM_BR_MPRED_CMPL,
+	[PERF_COUNT_HW_CACHE_REFERENCES] =		PM_LD_REF_L1,
+	[PERF_COUNT_HW_CACHE_MISSES] =			PM_LD_MISS_L1,
+};
+
 static int power10_generic_events[] = {
 	[PERF_COUNT_HW_CPU_CYCLES] =			PM_RUN_CYC,
 	[PERF_COUNT_HW_INSTRUCTIONS] =			PM_RUN_INST_CMPL,
@@ -276,6 +321,107 @@ static void power10_config_bhrb(u64 pmu_bhrb_filter)
  * 0 means not supported, -1 means nonsensical, other values
  * are event codes.
  */
+static u64 power10_cache_events_dd1[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
+	[C(L1D)] = {
+		[C(OP_READ)] = {
+			[C(RESULT_ACCESS)] = PM_LD_REF_L1,
+			[C(RESULT_MISS)] = PM_LD_MISS_L1,
+		},
+		[C(OP_WRITE)] = {
+			[C(RESULT_ACCESS)] = 0,
+			[C(RESULT_MISS)] = PM_ST_MISS_L1,
+		},
+		[C(OP_PREFETCH)] = {
+			[C(RESULT_ACCESS)] = PM_LD_PREFETCH_CACHE_LINE_MISS,
+			[C(RESULT_MISS)] = 0,
+		},
+	},
+	[C(L1I)] = {
+		[C(OP_READ)] = {
+			[C(RESULT_ACCESS)] = PM_INST_FROM_L1,
+			[C(RESULT_MISS)] = PM_L1_ICACHE_MISS,
+		},
+		[C(OP_WRITE)] = {
+			[C(RESULT_ACCESS)] = PM_INST_FROM_L1MISS,
+			[C(RESULT_MISS)] = -1,
+		},
+		[C(OP_PREFETCH)] = {
+			[C(RESULT_ACCESS)] = PM_IC_PREF_REQ,
+			[C(RESULT_MISS)] = 0,
+		},
+	},
+	[C(LL)] = {
+		[C(OP_READ)] = {
+			[C(RESULT_ACCESS)] = PM_DATA_FROM_L3,
+			[C(RESULT_MISS)] = PM_DATA_FROM_L3MISS,
+		},
+		[C(OP_WRITE)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+		[C(OP_PREFETCH)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = 0,
+		},
+	},
+	 [C(DTLB)] = {
+		[C(OP_READ)] = {
+			[C(RESULT_ACCESS)] = 0,
+			[C(RESULT_MISS)] = PM_DTLB_MISS,
+		},
+		[C(OP_WRITE)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+		[C(OP_PREFETCH)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+	},
+	[C(ITLB)] = {
+		[C(OP_READ)] = {
+			[C(RESULT_ACCESS)] = 0,
+			[C(RESULT_MISS)] = PM_ITLB_MISS,
+		},
+		[C(OP_WRITE)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+		[C(OP_PREFETCH)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+	},
+	[C(BPU)] = {
+		[C(OP_READ)] = {
+			[C(RESULT_ACCESS)] = PM_BR_CMPL,
+			[C(RESULT_MISS)] = PM_BR_MPRED_CMPL,
+		},
+		[C(OP_WRITE)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+		[C(OP_PREFETCH)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+	},
+	[C(NODE)] = {
+		[C(OP_READ)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+		[C(OP_WRITE)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+		[C(OP_PREFETCH)] = {
+			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_MISS)] = -1,
+		},
+	},
+};
+
 static u64 power10_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
 	[C(L1D)] = {
 		[C(OP_READ)] = {
@@ -422,6 +568,12 @@ int init_power10_pmu(void)
 	/* Set the PERF_REG_EXTENDED_MASK here */
 	PERF_REG_EXTENDED_MASK = PERF_REG_PMU_MASK_31;
 
+	if ((PVR_CFG(pvr) == 1)) {
+		power10_pmu.generic_events = power10_generic_events_dd1;
+		power10_pmu.attr_groups = power10_pmu_attr_groups_dd1;
+		power10_pmu.cache_events = &power10_cache_events_dd1;
+	}
+
 	rc = register_power_pmu(&power10_pmu);
 	if (rc)
 		return rc;
-- 
1.8.3.1


* [PATCH V2 5/7] powerpc/perf: Fix to update generic event codes for power10
From: Athira Rajeev @ 2020-11-26 16:54 UTC
  To: mpe; +Cc: maddy, linuxppc-dev

Fix the event codes for the events branch-instructions (to
PM_BR_FIN), branch-misses (to PM_MPRED_BR_FIN) and
cache-misses (to PM_LD_DEMAND_MISS_L1_FIN) in the power10
PMU, and update the list of generic events with the modified
event codes.
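
As an illustration (standard perf_event_open(2) usage, not from the
patch): a generic PERF_COUNT_HW_BRANCH_MISSES counter opened as below
is what the power10 PMU backs with PM_MPRED_BR_FIN after this change.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <string.h>
#include <unistd.h>

static int open_branch_misses(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_BRANCH_MISSES;

	/* Count this task on any CPU. */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}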

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
---
 arch/powerpc/perf/power10-events-list.h |  3 +++
 arch/powerpc/perf/power10-pmu.c         | 15 +++++++++------
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/perf/power10-events-list.h b/arch/powerpc/perf/power10-events-list.h
index 60c1b81..abd778f 100644
--- a/arch/powerpc/perf/power10-events-list.h
+++ b/arch/powerpc/perf/power10-events-list.h
@@ -15,6 +15,9 @@
 EVENT(PM_RUN_INST_CMPL,				0x500fa);
 EVENT(PM_BR_CMPL,                               0x4d05e);
 EVENT(PM_BR_MPRED_CMPL,                         0x400f6);
+EVENT(PM_BR_FIN,				0x2f04a);
+EVENT(PM_MPRED_BR_FIN,				0x3e098);
+EVENT(PM_LD_DEMAND_MISS_L1_FIN,			0x400f0);
 
 /* All L1 D cache load references counted at finish, gated by reject */
 EVENT(PM_LD_REF_L1,				0x100fc);
diff --git a/arch/powerpc/perf/power10-pmu.c b/arch/powerpc/perf/power10-pmu.c
index bc3d4dd..a02da69 100644
--- a/arch/powerpc/perf/power10-pmu.c
+++ b/arch/powerpc/perf/power10-pmu.c
@@ -114,6 +114,9 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 GENERIC_EVENT_ATTR(cache-misses,		PM_LD_MISS_L1);
 GENERIC_EVENT_ATTR(mem-loads,			MEM_LOADS);
 GENERIC_EVENT_ATTR(mem-stores,			MEM_STORES);
+GENERIC_EVENT_ATTR(branch-instructions,		PM_BR_FIN);
+GENERIC_EVENT_ATTR(branch-misses,		PM_MPRED_BR_FIN);
+GENERIC_EVENT_ATTR(cache-misses,		PM_LD_DEMAND_MISS_L1_FIN);
 
 CACHE_EVENT_ATTR(L1-dcache-load-misses,		PM_LD_MISS_L1);
 CACHE_EVENT_ATTR(L1-dcache-loads,		PM_LD_REF_L1);
@@ -157,10 +160,10 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 static struct attribute *power10_events_attr[] = {
 	GENERIC_EVENT_PTR(PM_RUN_CYC),
 	GENERIC_EVENT_PTR(PM_RUN_INST_CMPL),
-	GENERIC_EVENT_PTR(PM_BR_CMPL),
-	GENERIC_EVENT_PTR(PM_BR_MPRED_CMPL),
+	GENERIC_EVENT_PTR(PM_BR_FIN),
+	GENERIC_EVENT_PTR(PM_MPRED_BR_FIN),
 	GENERIC_EVENT_PTR(PM_LD_REF_L1),
-	GENERIC_EVENT_PTR(PM_LD_MISS_L1),
+	GENERIC_EVENT_PTR(PM_LD_DEMAND_MISS_L1_FIN),
 	GENERIC_EVENT_PTR(MEM_LOADS),
 	GENERIC_EVENT_PTR(MEM_STORES),
 	CACHE_EVENT_PTR(PM_LD_MISS_L1),
@@ -259,10 +262,10 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 static int power10_generic_events[] = {
 	[PERF_COUNT_HW_CPU_CYCLES] =			PM_RUN_CYC,
 	[PERF_COUNT_HW_INSTRUCTIONS] =			PM_RUN_INST_CMPL,
-	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] =		PM_BR_CMPL,
-	[PERF_COUNT_HW_BRANCH_MISSES] =			PM_BR_MPRED_CMPL,
+	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] =		PM_BR_FIN,
+	[PERF_COUNT_HW_BRANCH_MISSES] =			PM_MPRED_BR_FIN,
 	[PERF_COUNT_HW_CACHE_REFERENCES] =		PM_LD_REF_L1,
-	[PERF_COUNT_HW_CACHE_MISSES] =			PM_LD_MISS_L1,
+	[PERF_COUNT_HW_CACHE_MISSES] =			PM_LD_DEMAND_MISS_L1_FIN,
 };
 
 static u64 power10_bhrb_filter_map(u64 branch_sample_type)
-- 
1.8.3.1


* [PATCH V2 6/7] powerpc/perf: Fix to update cache events with l2l3 events in power10
From: Athira Rajeev @ 2020-11-26 16:54 UTC
  To: mpe; +Cc: maddy, linuxppc-dev

Export the l2l3 store events (PM_L2_ST and PM_L2_ST_MISS) and
the LLC-prefetches event (PM_L3_PF_MISS_L3) via sysfs, and
also add them to the list of cache events.
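
As an illustration (the config packing is from the perf_event_open(2)
ABI; the helper is hypothetical): a PERF_TYPE_HW_CACHE config packs
cache id, op and result into one value, which the core code uses to
index power10_cache_events. LLC-stores, for example, resolves to
[C(LL)][C(OP_WRITE)][C(RESULT_ACCESS)], i.e. PM_L2_ST after this patch.

#include <linux/perf_event.h>

static __u64 hw_cache_config(__u64 cache, __u64 op, __u64 result)
{
	return cache | (op << 8) | (result << 16);
}

/*
 * LLC-stores: hw_cache_config(PERF_COUNT_HW_CACHE_LL,
 *			       PERF_COUNT_HW_CACHE_OP_WRITE,
 *			       PERF_COUNT_HW_CACHE_RESULT_ACCESS);
 */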

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
---
 arch/powerpc/perf/power10-events-list.h |  6 ++++++
 arch/powerpc/perf/power10-pmu.c         | 12 +++++++++---
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/perf/power10-events-list.h b/arch/powerpc/perf/power10-events-list.h
index abd778f..e45dafe 100644
--- a/arch/powerpc/perf/power10-events-list.h
+++ b/arch/powerpc/perf/power10-events-list.h
@@ -39,6 +39,12 @@
 EVENT(PM_DATA_FROM_L3,				0x01340000001c040);
 /* Demand LD - L3 Miss (not L2 hit and not L3 hit) */
 EVENT(PM_DATA_FROM_L3MISS,			0x300fe);
+/* All successful D-side store dispatches for this thread */
+EVENT(PM_L2_ST,					0x010000046080);
+/* All successful D-side store dispatches for this thread that were L2 Miss */
+EVENT(PM_L2_ST_MISS,				0x26880);
+/* Total HW L3 prefetches(Load+store) */
+EVENT(PM_L3_PF_MISS_L3,				0x100000016080);
 /* Data PTEG reload */
 EVENT(PM_DTLB_MISS,				0x300fc);
 /* ITLB Reloaded */
diff --git a/arch/powerpc/perf/power10-pmu.c b/arch/powerpc/perf/power10-pmu.c
index a02da69..79e0206 100644
--- a/arch/powerpc/perf/power10-pmu.c
+++ b/arch/powerpc/perf/power10-pmu.c
@@ -127,6 +127,9 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 CACHE_EVENT_ATTR(L1-icache-prefetches,		PM_IC_PREF_REQ);
 CACHE_EVENT_ATTR(LLC-load-misses,		PM_DATA_FROM_L3MISS);
 CACHE_EVENT_ATTR(LLC-loads,			PM_DATA_FROM_L3);
+CACHE_EVENT_ATTR(LLC-prefetches,		PM_L3_PF_MISS_L3);
+CACHE_EVENT_ATTR(LLC-store-misses,		PM_L2_ST_MISS);
+CACHE_EVENT_ATTR(LLC-stores,			PM_L2_ST);
 CACHE_EVENT_ATTR(branch-load-misses,		PM_BR_MPRED_CMPL);
 CACHE_EVENT_ATTR(branch-loads,			PM_BR_CMPL);
 CACHE_EVENT_ATTR(dTLB-load-misses,		PM_DTLB_MISS);
@@ -175,6 +178,9 @@ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
 	CACHE_EVENT_PTR(PM_IC_PREF_REQ),
 	CACHE_EVENT_PTR(PM_DATA_FROM_L3MISS),
 	CACHE_EVENT_PTR(PM_DATA_FROM_L3),
+	CACHE_EVENT_PTR(PM_L3_PF_MISS_L3),
+	CACHE_EVENT_PTR(PM_L2_ST_MISS),
+	CACHE_EVENT_PTR(PM_L2_ST),
 	CACHE_EVENT_PTR(PM_BR_MPRED_CMPL),
 	CACHE_EVENT_PTR(PM_BR_CMPL),
 	CACHE_EVENT_PTR(PM_DTLB_MISS),
@@ -460,11 +466,11 @@ static void power10_config_bhrb(u64 pmu_bhrb_filter)
 			[C(RESULT_MISS)] = PM_DATA_FROM_L3MISS,
 		},
 		[C(OP_WRITE)] = {
-			[C(RESULT_ACCESS)] = -1,
-			[C(RESULT_MISS)] = -1,
+			[C(RESULT_ACCESS)] = PM_L2_ST,
+			[C(RESULT_MISS)] = PM_L2_ST_MISS,
 		},
 		[C(OP_PREFETCH)] = {
-			[C(RESULT_ACCESS)] = -1,
+			[C(RESULT_ACCESS)] = PM_L3_PF_MISS_L3,
 			[C(RESULT_MISS)] = 0,
 		},
 	},
-- 
1.8.3.1


* [PATCH V2 7/7] powerpc/perf: MMCR0 control for PMU registers under PMCC=00
From: Athira Rajeev @ 2020-11-26 16:54 UTC
  To: mpe; +Cc: maddy, linuxppc-dev

PowerISA v3.1 introduces a new control bit (PMCCEXT) for
restricting access to the group B PMU registers in problem
state when MMCR0 PMCC=0b00. In problem state, and when
MMCR0 PMCC=0b00, setting Monitor Mode Control Register 0
bit 54 (MMCR0 PMCCEXT) restricts read permission on the
group B Performance Monitor Registers (SIER, SIAR, SDAR and
MMCR1). When this bit is set to zero, the group B registers
are readable. On other platforms (like power9), the older
behaviour is retained, where the group B PMU SPRs are
readable.

Add support for the MMCR0 PMCCEXT bit in power10 by enabling
this bit during boot and in the PMU event enable/disable
callback functions.
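
As an illustration (a one-line sketch of the bit arithmetic): MMCR0
bit 54 in IBM numbering (bit 0 being the MSB of the 64-bit SPR) is
1UL << (63 - 54) = 1UL << 9 = 0x200, matching the MMCR0_PMCCEXT
constant the patch adds to reg.h.

/* IBM bit 54 of a 64-bit SPR == conventional bit 63 - 54 = 9. */
#define MMCR0_PMCCEXT	(1UL << (63 - 54))	/* == 0x00000200 */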

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/reg.h        | 1 +
 arch/powerpc/kernel/cpu_setup_power.c | 1 +
 arch/powerpc/kernel/dt_cpu_ftrs.c     | 1 +
 arch/powerpc/perf/core-book3s.c       | 4 ++++
 arch/powerpc/perf/isa207-common.c     | 8 ++++++++
 5 files changed, 15 insertions(+)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index f877a57..cba9965 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -864,6 +864,7 @@
 #define   MMCR0_BHRBA	0x00200000UL /* BHRB Access allowed in userspace */
 #define   MMCR0_EBE	0x00100000UL /* Event based branch enable */
 #define   MMCR0_PMCC	0x000c0000UL /* PMC control */
+#define   MMCR0_PMCCEXT	ASM_CONST(0x00000200) /* PMCCEXT control */
 #define   MMCR0_PMCC_U6	0x00080000UL /* PMC1-6 are R/W by user (PR) */
 #define   MMCR0_PMC1CE	0x00008000UL /* PMC1 count enable*/
 #define   MMCR0_PMCjCE	ASM_CONST(0x00004000) /* PMCj count enable*/
diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c
index 0c2191e..3cca88e 100644
--- a/arch/powerpc/kernel/cpu_setup_power.c
+++ b/arch/powerpc/kernel/cpu_setup_power.c
@@ -123,6 +123,7 @@ static void init_PMU_ISA31(void)
 {
 	mtspr(SPRN_MMCR3, 0);
 	mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
+	mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
 }
 
 /*
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index 1098863..9d07965 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -454,6 +454,7 @@ static void init_pmu_power10(void)
 
 	mtspr(SPRN_MMCR3, 0);
 	mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
+	mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
 }
 
 static int __init feat_enable_pmu_power10(struct dt_cpu_feature *f)
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 3c8c6ce..35cf93c 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -95,6 +95,7 @@ struct cpu_hw_events {
 #define SPRN_SIER3		0
 #define MMCRA_SAMPLE_ENABLE	0
 #define MMCRA_BHRB_DISABLE     0
+#define MMCR0_PMCCEXT		0
 
 static inline unsigned long perf_ip_adjust(struct pt_regs *regs)
 {
@@ -1270,6 +1271,9 @@ static void power_pmu_disable(struct pmu *pmu)
 		val |= MMCR0_FC;
 		val &= ~(MMCR0_EBE | MMCR0_BHRBA | MMCR0_PMCC | MMCR0_PMAO |
 			 MMCR0_FC56);
+		/* Set mmcr0 PMCCEXT for p10 */
+		if (ppmu->flags & PPMU_ARCH_31)
+			val |= MMCR0_PMCCEXT;
 
 		/*
 		 * The barrier is to make sure the mtspr has been
diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
index 0f4983e..24f0a90 100644
--- a/arch/powerpc/perf/isa207-common.c
+++ b/arch/powerpc/perf/isa207-common.c
@@ -558,6 +558,14 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
 	if (!(pmc_inuse & 0x60))
 		mmcr->mmcr0 |= MMCR0_FC56;
 
+	/*
+	 * Set mmcr0 (PMCCEXT) for p10 which
+	 * will restrict access to group B registers
+	 * when MMCR0 PMCC=0b00.
+	 */
+	if (cpu_has_feature(CPU_FTR_ARCH_31))
+		mmcr->mmcr0 |= MMCR0_PMCCEXT;
+
 	mmcr->mmcr1 = mmcr1;
 	mmcr->mmcra = mmcra;
 	mmcr->mmcr2 = mmcr2;
-- 
1.8.3.1


* Re: [PATCH V2 0/7] powerpc/perf: Fixes for power10 PMU
From: Michael Ellerman @ 2020-12-10 11:29 UTC
  To: mpe, Athira Rajeev; +Cc: maddy, linuxppc-dev

On Thu, 26 Nov 2020 11:54:37 -0500, Athira Rajeev wrote:
> This patchset contains seven PMU fixes for power10.
> 
> Patch 1 fixes the event code to include the radix_scope_qual
> bit in power10.
> Patches 2 and 3 update the event group constraints for L2/L3
> and threshold events in power10.
> Patches 4, 5 and 6 include event code changes for the l2/l3
> events and some of the generic events.
> Patch 7 adds a fix for the PMCCEXT bit in power10.
> 
> [...]

Applied to powerpc/next.

[1/7] powerpc/perf: Fix to update radix_scope_qual in power10
      https://git.kernel.org/powerpc/c/d3afd28cd2f35b2a1046b76e0cf010b684da2e84
[2/7] powerpc/perf: Update the PMU group constraints for l2l3 events in power10
      https://git.kernel.org/powerpc/c/e924be7b0b0d1f37d0509c854a92c7a71e3cdfe7
[3/7] powerpc/perf: Fix the PMU group constraints for threshold events in power10
      https://git.kernel.org/powerpc/c/0263bbb377af0c2d38bc8b2ad2ff147e240094de
[4/7] powerpc/perf: Add generic and cache event list for power10 DD1
      https://git.kernel.org/powerpc/c/c0e3985790251b307b7b71b687ed0128741b3f34
[5/7] powerpc/perf: Fix to update generic event codes for power10
      https://git.kernel.org/powerpc/c/1f12316394e3b241e70ed620ca846002c8ace3ec
[6/7] powerpc/perf: Fix to update cache events with l2l3 events in power10
      https://git.kernel.org/powerpc/c/9a8ee52634235993273c43ef67669d8168497dd7
[7/7] powerpc/perf: MMCR0 control for PMU registers under PMCC=00
      https://git.kernel.org/powerpc/c/91668ab7db4bcfae332e561df1de2401f3f18553

cheers
