linux-kernel.vger.kernel.org archive mirror
* [PATCH v5 0/6] perf: Support for ARM DynamIQ Shared Unit
@ 2017-08-08 11:37 Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 1/6] perf: Export perf_event_update_userpage Suzuki K Poulose
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-08 11:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will.deacon, marc.zyngier, mark.rutland, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz,
	Suzuki K Poulose

This series adds support for the PMU in ARM DynamIQ Shared Unit (DSU).
The DSU integrates one or more cores with an L3 memory system, control
logic, and external interfaces to form a multicore cluster. The PMU
allows counting various events related to the L3, SCU, etc., using
independent 32-bit counters, and also provides a 64-bit cycle counter.

The PMU can only be accessed via CPU system registers, which are common
to the cores connected to the same DSU. The PMU registers mostly follow
the semantics of the ARMv8 PMU, with the exception that the counters
record cluster-wide events.

Tested on a Fast model with DSU. The driver only supports ARM64 at the
moment. It can be extended to support ARM32 by providing register
accessors like we do in arch/arm64/include/asm/arm_dsu_pmu.h.

The firmware should set up the appropriate bits in ACTLR_EL3/EL2 to
allow EL1 access to the PMU registers.
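
As an illustration of the user-visible interface (a hypothetical sketch,
not part of the series; the PMU instance name depends on probe order), a
cluster-wide event can be opened from userspace with perf_event_open(),
using the dynamic PMU type published under
/sys/bus/event_source/devices/arm_dsu_0/type:

	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/perf_event.h>

	/*
	 * Count DSU cycles (event 0x11) on one of the CPUs attached to
	 * the DSU. 'pmu_type' is the value read from the sysfs 'type'
	 * file; 'cpu' must be a CPU connected to this DSU instance.
	 */
	static int open_dsu_cycles(int pmu_type, int cpu)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = pmu_type;
		attr.config = 0x11;	/* cycles */

		/* CPU-bound (pid == -1); task-bound events are rejected. */
		return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
	}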

Series applies on v4.13-rc4 and is also available at:

  git://linux-arm.org/linux-skp.git 4.13/dsu-v5

Changes since V4:
 - Fix regressions introduced by v4 with the rename of the generic
   helper.
 - Added reviewed-by tag from Rob

Changes since V3:
 - Rename the generic OF helper to of_cpu_node_to_id(), and return
   -ENODEV upon failure instead of nr_cpu_ids
 - Fix node name in device tree node example.

Changes since V2:
 - Cleanup dsu_pmu_device_probe error handling.
 - Fix event validate_group to invert the result check of validate_event
 - Return errors if we failed to parse CPUs in the DSU.
 - Add MODULE_DEVICE_TABLE entry
 - Use hlist_entry_safe for converting cpuhp_node to dsu_pmu.
 - Added Reviews and Acks.

Changes since V1:
 - Use the new of_device_node_get_cpu() helper for Coresight
 - Rebased to 4.13-rc2

Suzuki K Poulose (6):
  perf: Export perf_event_update_userpage
  of: Add helper for mapping device node to logical CPU number
  coresight: of: Use of_cpu_node_to_id helper
  irqchip: gic-v3: Use of_cpu_node_to_id helper
  dt-bindings: Document devicetree binding for ARM DSU PMU
  perf: ARM DynamIQ Shared Unit PMU support

 .../devicetree/bindings/arm/arm-dsu-pmu.txt        |  27 +
 arch/arm64/include/asm/arm_dsu_pmu.h               | 124 +++
 drivers/hwtracing/coresight/of_coresight.c         |  15 +-
 drivers/irqchip/irq-gic-v3.c                       |  29 +-
 drivers/of/base.c                                  |  26 +
 drivers/perf/Kconfig                               |   9 +
 drivers/perf/Makefile                              |   1 +
 drivers/perf/arm_dsu_pmu.c                         | 883 +++++++++++++++++++++
 include/linux/of.h                                 |   7 +
 kernel/events/core.c                               |   1 +
 10 files changed, 1083 insertions(+), 39 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/arm/arm-dsu-pmu.txt
 create mode 100644 arch/arm64/include/asm/arm_dsu_pmu.h
 create mode 100644 drivers/perf/arm_dsu_pmu.c

-- 
2.7.5

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v5 1/6] perf: Export perf_event_update_userpage
  2017-08-08 11:37 [PATCH v5 0/6] perf: Support for ARM DynamIQ Shared Unit Suzuki K Poulose
@ 2017-08-08 11:37 ` Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 2/6] of: Add helper for mapping device node to logical CPU number Suzuki K Poulose
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-08 11:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will.deacon, marc.zyngier, mark.rutland, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz,
	Suzuki K Poulose

Export perf_event_update_userpage() so that PMU drivers using it can
be built as modules.

Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 kernel/events/core.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 426c2ff..21aad7a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4946,6 +4946,7 @@ void perf_event_update_userpage(struct perf_event *event)
 unlock:
 	rcu_read_unlock();
 }
+EXPORT_SYMBOL_GPL(perf_event_update_userpage);
 
 static int perf_mmap_fault(struct vm_fault *vmf)
 {
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v5 2/6] of: Add helper for mapping device node to logical CPU number
  2017-08-08 11:37 [PATCH v5 0/6] perf: Support for ARM DynamIQ Shared Unit Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 1/6] perf: Export perf_event_update_userpage Suzuki K Poulose
@ 2017-08-08 11:37 ` Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 3/6] coresight: of: Use of_cpu_node_to_id helper Suzuki K Poulose
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-08 11:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will.deacon, marc.zyngier, mark.rutland, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz,
	Suzuki K Poulose, Sudeep Holla

Add a helper to map a device node to a logical CPU number to avoid
duplication. Currently this is open coded in different places (e.g.
gic-v3, coresight). The helper tries to map the device node to a
"possible" logical CPU id, which may not be online yet. It is the
responsibility of the user to make sure that the CPU is online. The
helper uses of_get_cpu_node(), which uses arch specific backends to
match the physical ids.
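
For example (an illustrative sketch; the actual conversions follow later
in this series), a driver holding a "cpu" phandle in its node 'node'
could do:

	struct device_node *dn = of_parse_phandle(node, "cpu", 0);
	int cpu;

	if (!dn)
		return -ENODEV;
	cpu = of_cpu_node_to_id(dn);
	of_node_put(dn);
	if (cpu < 0)
		return cpu;	/* -ENODEV: not a possible CPU */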

Cc: devicetree@vger.kernel.org
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since V3:
 - Renamed the helper to of_cpu_node_to_id(), suggested by Rob
 - Return -ENODEV on failure instead of nr_cpu_ids
---
 drivers/of/base.c  | 26 ++++++++++++++++++++++++++
 include/linux/of.h |  7 +++++++
 2 files changed, 33 insertions(+)

diff --git a/drivers/of/base.c b/drivers/of/base.c
index 686628d..e8625ff 100644
--- a/drivers/of/base.c
+++ b/drivers/of/base.c
@@ -420,6 +420,32 @@ struct device_node *of_get_cpu_node(int cpu, unsigned int *thread)
 EXPORT_SYMBOL(of_get_cpu_node);
 
 /**
+ * of_cpu_node_to_id: Get the logical CPU number for a given device_node
+ *
+ * @cpu_node: Pointer to the device_node for CPU.
+ *
+ * Returns the logical CPU number of the given CPU device_node.
+ * Returns -ENODEV if the CPU is not found.
+ */
+int of_cpu_node_to_id(struct device_node *cpu_node)
+{
+	int cpu;
+	bool found = false;
+	struct device_node *np;
+
+	for_each_possible_cpu(cpu) {
+		np = of_get_cpu_node(cpu, NULL);
+		found = (cpu_node == np);
+		of_node_put(np);
+		if (found)
+			return cpu;
+	}
+
+	return -ENODEV;
+}
+EXPORT_SYMBOL(of_cpu_node_to_id);
+
+/**
  * __of_device_is_compatible() - Check if the node matches given constraints
  * @device: pointer to node
  * @compat: required compatible string, NULL or "" for any match
diff --git a/include/linux/of.h b/include/linux/of.h
index 4a8a709..23cf02d 100644
--- a/include/linux/of.h
+++ b/include/linux/of.h
@@ -539,6 +539,8 @@ const char *of_prop_next_string(struct property *prop, const char *cur);
 
 bool of_console_check(struct device_node *dn, char *name, int index);
 
+extern int of_cpu_node_to_id(struct device_node *np);
+
 #else /* CONFIG_OF */
 
 static inline void of_core_init(void)
@@ -863,6 +865,11 @@ static inline void of_property_clear_flag(struct property *p, unsigned long flag
 {
 }
 
+static inline int of_cpu_node_to_id(struct device_node *np)
+{
+	return -ENODEV;
+}
+
 #define of_match_ptr(_ptr)	NULL
 #define of_match_node(_matches, _node)	NULL
 #endif /* CONFIG_OF */
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v5 3/6] coresight: of: Use of_cpu_node_to_id helper
  2017-08-08 11:37 [PATCH v5 0/6] perf: Support for ARM DynamIQ Shared Unit Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 1/6] perf: Export perf_event_update_userpage Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 2/6] of: Add helper for mapping device node to logical CPU number Suzuki K Poulose
@ 2017-08-08 11:37 ` Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 4/6] irqchip: gic-v3: " Suzuki K Poulose
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-08 11:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will.deacon, marc.zyngier, mark.rutland, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz,
	Suzuki K Poulose, Leo Yan

Reuse the new generic helper, of_cpu_node_to_id(), to map a
given CPU phandle to a logical CPU number.

Cc: Leo Yan <leo.yan@linaro.org>
Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since V4:
 - Fix a regression introduced in v4, reported by bugrobot
Changes since V3:
 - Reflect the renaming of the helper and return value changes
---
 drivers/hwtracing/coresight/of_coresight.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/drivers/hwtracing/coresight/of_coresight.c b/drivers/hwtracing/coresight/of_coresight.c
index a187941..7c37544 100644
--- a/drivers/hwtracing/coresight/of_coresight.c
+++ b/drivers/hwtracing/coresight/of_coresight.c
@@ -104,26 +104,17 @@ static int of_coresight_alloc_memory(struct device *dev,
 int of_coresight_get_cpu(const struct device_node *node)
 {
 	int cpu;
-	bool found;
-	struct device_node *dn, *np;
+	struct device_node *dn;
 
 	dn = of_parse_phandle(node, "cpu", 0);
-
 	/* Affinity defaults to CPU0 */
 	if (!dn)
 		return 0;
-
-	for_each_possible_cpu(cpu) {
-		np = of_cpu_device_node_get(cpu);
-		found = (dn == np);
-		of_node_put(np);
-		if (found)
-			break;
-	}
+	cpu = of_cpu_node_to_id(dn);
 	of_node_put(dn);
 
 	/* Affinity to CPU0 if no cpu nodes are found */
-	return found ? cpu : 0;
+	return (cpu < 0) ? 0 : cpu;
 }
 EXPORT_SYMBOL_GPL(of_coresight_get_cpu);
 
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v5 4/6] irqchip: gic-v3: Use of_cpu_node_to_id helper
  2017-08-08 11:37 [PATCH v5 0/6] perf: Support for ARM DynamIQ Shared Unit Suzuki K Poulose
                   ` (2 preceding siblings ...)
  2017-08-08 11:37 ` [PATCH v5 3/6] coresight: of: Use of_cpu_node_to_id helper Suzuki K Poulose
@ 2017-08-08 11:37 ` Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 5/6] dt-bindings: Document devicetree binding for ARM DSU PMU Suzuki K Poulose
  2017-08-08 11:37 ` [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support Suzuki K Poulose
  5 siblings, 0 replies; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-08 11:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will.deacon, marc.zyngier, mark.rutland, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz,
	Suzuki K Poulose

Use the new generic helper, of_cpu_node_to_id(), instead of our own
version to map a device node to a logical CPU number.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since V3:
 - Reflect the change in the helper name and return value.
---
 drivers/irqchip/irq-gic-v3.c | 29 ++---------------------------
 1 file changed, 2 insertions(+), 27 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index dbffb7a..ada931d 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -978,31 +978,6 @@ static int __init gic_validate_dist_version(void __iomem *dist_base)
 	return 0;
 }
 
-static int get_cpu_number(struct device_node *dn)
-{
-	const __be32 *cell;
-	u64 hwid;
-	int i;
-
-	cell = of_get_property(dn, "reg", NULL);
-	if (!cell)
-		return -1;
-
-	hwid = of_read_number(cell, of_n_addr_cells(dn));
-
-	/*
-	 * Non affinity bits must be set to 0 in the DT
-	 */
-	if (hwid & ~MPIDR_HWID_BITMASK)
-		return -1;
-
-	for (i = 0; i < num_possible_cpus(); i++)
-		if (cpu_logical_map(i) == hwid)
-			return i;
-
-	return -1;
-}
-
 /* Create all possible partitions at boot time */
 static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
 {
@@ -1053,8 +1028,8 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
 			if (WARN_ON(!cpu_node))
 				continue;
 
-			cpu = get_cpu_number(cpu_node);
-			if (WARN_ON(cpu == -1))
+			cpu = of_cpu_node_to_id(cpu_node);
+			if (WARN_ON(cpu < 0))
 				continue;
 
 			pr_cont("%s[%d] ", cpu_node->full_name, cpu);
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v5 5/6] dt-bindings: Document devicetree binding for ARM DSU PMU
  2017-08-08 11:37 [PATCH v5 0/6] perf: Support for ARM DynamIQ Shared Unit Suzuki K Poulose
                   ` (3 preceding siblings ...)
  2017-08-08 11:37 ` [PATCH v5 4/6] irqchip: gic-v3: " Suzuki K Poulose
@ 2017-08-08 11:37 ` Suzuki K Poulose
  2017-08-17 15:09   ` Rob Herring
  2017-08-08 11:37 ` [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support Suzuki K Poulose
  5 siblings, 1 reply; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-08 11:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will.deacon, marc.zyngier, mark.rutland, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz,
	Suzuki K Poulose

This patch documents the devicetree bindings for the ARM DSU PMU.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Rob Herring <robh@kernel.org>
Cc: devicetree@vger.kernel.org
Cc: frowand.list@gmail.com
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since V3:
 - Fixed node name in the example, suggested by Rob
---
 .../devicetree/bindings/arm/arm-dsu-pmu.txt        | 27 ++++++++++++++++++++++
 1 file changed, 27 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/arm/arm-dsu-pmu.txt

diff --git a/Documentation/devicetree/bindings/arm/arm-dsu-pmu.txt b/Documentation/devicetree/bindings/arm/arm-dsu-pmu.txt
new file mode 100644
index 0000000..6efabba
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/arm-dsu-pmu.txt
@@ -0,0 +1,27 @@
+* ARM DynamIQ Shared Unit (DSU) Performance Monitor Unit (PMU)
+
+The ARM DynamIQ Shared Unit (DSU) integrates one or more CPU cores
+with a shared L3 memory system, control logic and external interfaces to
+form a multicore cluster. The PMU enables gathering various statistics on
+the operation of the DSU. The PMU provides independent 32-bit counters that
+can count any of the supported events, along with a 64-bit cycle counter.
+The PMU is accessed via CPU system registers and has no MMIO component.
+
+** DSU PMU required properties:
+
+- compatible	: should be one of :
+
+		"arm,dsu-pmu"
+
+- interrupts	: Exactly 1 SPI must be listed.
+
+- cpus		: List of phandles for the CPUs connected to this DSU instance.
+
+
+** Example:
+
+dsu-pmu-0 {
+	compatible = "arm,dsu-pmu";
+	interrupts = <GIC_SPI 02 IRQ_TYPE_LEVEL_HIGH>;
+	cpus = <&cpu_0>, <&cpu_1>;
+};
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support
  2017-08-08 11:37 [PATCH v5 0/6] perf: Support for ARM DynamIQ Shared Unit Suzuki K Poulose
                   ` (4 preceding siblings ...)
  2017-08-08 11:37 ` [PATCH v5 5/6] dt-bindings: Document devicetree binding for ARM DSU PMU Suzuki K Poulose
@ 2017-08-08 11:37 ` Suzuki K Poulose
  2017-08-16 14:10   ` Mark Rutland
  5 siblings, 1 reply; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-08 11:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will.deacon, marc.zyngier, mark.rutland, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz,
	Suzuki K Poulose

Add support for the Cluster PMU part of the ARM DynamIQ Shared Unit (DSU).
The DSU integrates one or more cores with an L3 memory system, control
logic, and external interfaces to form a multicore cluster. The PMU
allows counting various events related to the L3, SCU, etc., along with
providing a cycle counter.

The PMU can be accessed via system registers, which are common
to the cores in the same cluster. The PMU registers mostly follow the
semantics of the ARMv8 PMU, with the exception that the counters record
cluster-wide events.

This driver is mostly based on the ARMv8 and CCI PMU drivers.
The driver only supports ARM64 at the moment. It can be extended
to support ARM32 by providing register accessors like we do in
arch/arm64/include/asm/arm_dsu_pmu.h.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since V4:
 - Reflect the changed generic helper for mapping CPU id
Changes since V2:
 - Cleanup dsu_pmu_device_probe error handling.
 - Fix event validate_group to invert the result check of validate_event
 - Return errors if we failed to parse CPUs in the DSU.
 - Add MODULE_DEVICE_TABLE entry
 - Use hlist_entry_safe for converting cpuhp_node to dsu_pmu.
---
 arch/arm64/include/asm/arm_dsu_pmu.h | 124 +++++
 drivers/perf/Kconfig                 |   9 +
 drivers/perf/Makefile                |   1 +
 drivers/perf/arm_dsu_pmu.c           | 883 +++++++++++++++++++++++++++++++++++
 4 files changed, 1017 insertions(+)
 create mode 100644 arch/arm64/include/asm/arm_dsu_pmu.h
 create mode 100644 drivers/perf/arm_dsu_pmu.c

diff --git a/arch/arm64/include/asm/arm_dsu_pmu.h b/arch/arm64/include/asm/arm_dsu_pmu.h
new file mode 100644
index 0000000..e7276fd
--- /dev/null
+++ b/arch/arm64/include/asm/arm_dsu_pmu.h
@@ -0,0 +1,124 @@
+/*
+ * ARM DynamIQ Shared Unit (DSU) PMU Low level register access routines.
+ *
+ * Copyright (C) ARM Limited, 2017.
+ *
+ * Author: Suzuki K Poulose <suzuki.poulose@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2, as published by the Free Software Foundation.
+ */
+
+#include <asm/sysreg.h>
+
+
+#define CLUSTERPMCR_EL1			sys_reg(3, 0, 15, 5, 0)
+#define CLUSTERPMCNTENSET_EL1		sys_reg(3, 0, 15, 5, 1)
+#define CLUSTERPMCNTENCLR_EL1		sys_reg(3, 0, 15, 5, 2)
+#define CLUSTERPMOVSSET_EL1		sys_reg(3, 0, 15, 5, 3)
+#define CLUSTERPMOVSCLR_EL1		sys_reg(3, 0, 15, 5, 4)
+#define CLUSTERPMSELR_EL1		sys_reg(3, 0, 15, 5, 5)
+#define CLUSTERPMINTENSET_EL1		sys_reg(3, 0, 15, 5, 6)
+#define CLUSTERPMINTENCLR_EL1		sys_reg(3, 0, 15, 5, 7)
+#define CLUSTERPMCCNTR_EL1		sys_reg(3, 0, 15, 6, 0)
+#define CLUSTERPMXEVTYPER_EL1		sys_reg(3, 0, 15, 6, 1)
+#define CLUSTERPMXEVCNTR_EL1		sys_reg(3, 0, 15, 6, 2)
+#define CLUSTERPMMDCR_EL1		sys_reg(3, 0, 15, 6, 3)
+#define CLUSTERPMCEID0_EL1		sys_reg(3, 0, 15, 6, 4)
+#define CLUSTERPMCEID1_EL1		sys_reg(3, 0, 15, 6, 5)
+
+static inline u32 __dsu_pmu_read_pmcr(void)
+{
+	return read_sysreg_s(CLUSTERPMCR_EL1);
+}
+
+static inline void __dsu_pmu_write_pmcr(u32 val)
+{
+	write_sysreg_s(val, CLUSTERPMCR_EL1);
+	isb();
+}
+
+static inline u32 __dsu_pmu_get_pmovsclr(void)
+{
+	u32 val = read_sysreg_s(CLUSTERPMOVSCLR_EL1);
+	/* Clear the bit */
+	write_sysreg_s(val, CLUSTERPMOVSCLR_EL1);
+	isb();
+	return val;
+}
+
+static inline void __dsu_pmu_select_counter(int counter)
+{
+	write_sysreg_s(counter, CLUSTERPMSELR_EL1);
+	isb();
+}
+
+static inline u64 __dsu_pmu_read_counter(int counter)
+{
+	__dsu_pmu_select_counter(counter);
+	return read_sysreg_s(CLUSTERPMXEVCNTR_EL1);
+}
+
+static inline void __dsu_pmu_write_counter(int counter, u64 val)
+{
+	__dsu_pmu_select_counter(counter);
+	write_sysreg_s(val, CLUSTERPMXEVCNTR_EL1);
+	isb();
+}
+
+static inline void __dsu_pmu_set_event(int counter, u32 event)
+{
+	__dsu_pmu_select_counter(counter);
+	write_sysreg_s(event, CLUSTERPMXEVTYPER_EL1);
+	isb();
+}
+
+static inline u64 __dsu_pmu_read_pmccntr(void)
+{
+	return read_sysreg_s(CLUSTERPMCCNTR_EL1);
+}
+
+static inline void __dsu_pmu_write_pmccntr(u64 val)
+{
+	write_sysreg_s(val, CLUSTERPMCCNTR_EL1);
+	isb();
+}
+
+static inline void __dsu_pmu_disable_counter(int counter)
+{
+	write_sysreg_s(BIT(counter), CLUSTERPMCNTENCLR_EL1);
+	isb();
+}
+
+static inline void __dsu_pmu_enable_counter(int counter)
+{
+	write_sysreg_s(BIT(counter), CLUSTERPMCNTENSET_EL1);
+	isb();
+}
+
+static inline void __dsu_pmu_counter_interrupt_enable(int counter)
+{
+	write_sysreg_s(BIT(counter), CLUSTERPMINTENSET_EL1);
+	isb();
+}
+
+static inline void __dsu_pmu_counter_interrupt_disable(int counter)
+{
+	write_sysreg_s(BIT(counter), CLUSTERPMINTENCLR_EL1);
+	isb();
+}
+
+
+static inline u32 __dsu_pmu_read_pmceid(int n)
+{
+	switch (n) {
+	case 0:
+		return read_sysreg_s(CLUSTERPMCEID0_EL1);
+	case 1:
+		return read_sysreg_s(CLUSTERPMCEID1_EL1);
+	default:
+		BUILD_BUG();
+		return 0;
+	}
+}
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index e5197ff..ee3d7d1 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -17,6 +17,15 @@ config ARM_PMU_ACPI
 	depends on ARM_PMU && ACPI
 	def_bool y
 
+config ARM_DSU_PMU
+	tristate "ARM DynamIQ Shared Unit (DSU) PMU"
+	depends on ARM64 && PERF_EVENTS
+	help
+	  Provides support for the performance monitor unit (PMU) in the ARM
+	  DynamIQ Shared Unit (DSU). The DSU integrates one or more cores with
+	  an L3 memory system and control logic. The PMU allows counting
+	  various events related to the DSU.
+
 config QCOM_L2_PMU
 	bool "Qualcomm Technologies L2-cache PMU"
 	depends on ARCH_QCOM && ARM64 && ACPI
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index 6420bd4..0adb4f6 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
 obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
+obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o
 obj-$(CONFIG_QCOM_L2_PMU)	+= qcom_l2_pmu.o
 obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
 obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
diff --git a/drivers/perf/arm_dsu_pmu.c b/drivers/perf/arm_dsu_pmu.c
new file mode 100644
index 0000000..20afe7f
--- /dev/null
+++ b/drivers/perf/arm_dsu_pmu.c
@@ -0,0 +1,883 @@
+/*
+ * ARM DynamIQ Shared Unit (DSU) PMU driver
+ *
+ * Copyright (C) ARM Limited, 2017.
+ *
+ * Based on ARM CCI-PMU, ARMv8 PMU-v3 drivers.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ */
+
+#define PMUNAME		"arm_dsu"
+#define DRVNAME		PMUNAME "_pmu"
+#define pr_fmt(fmt)	DRVNAME ": " fmt
+
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/spinlock.h>
+
+#include <asm/arm_dsu_pmu.h>
+
+/* PMU event codes */
+#define DSU_PMU_EVT_CYCLES		0x11
+#define DSU_PMU_EVT_CHAIN		0x1e
+
+#define DSU_PMU_MAX_COMMON_EVENTS	0x40
+
+#define DSU_PMU_MAX_HW_CNTRS		32
+#define DSU_PMU_HW_COUNTER_MASK		(DSU_PMU_MAX_HW_CNTRS - 1)
+
+#define CLUSTERPMCR_E			BIT(0)
+#define CLUSTERPMCR_P			BIT(1)
+#define CLUSTERPMCR_C			BIT(2)
+#define CLUSTERPMCR_N_SHIFT		11
+#define CLUSTERPMCR_N_MASK		0x1f
+#define CLUSTERPMCR_IDCODE_SHIFT	16
+#define CLUSTERPMCR_IDCODE_MASK		0xff
+#define CLUSTERPMCR_IMP_SHIFT		24
+#define CLUSTERPMCR_IMP_MASK		0xff
+#define CLUSTERPMCR_RES_MASK		0x7e8
+#define CLUSTERPMCR_RES_VAL		0x40
+
+#define DSU_ACTIVE_CPU_MASK		0x0
+#define DSU_SUPPORTED_CPU_MASK		0x1
+
+/*
+ * We use the index of the counters as they appear in the counter
+ * bit maps in the PMU registers (e.g CLUSTERPMSELR).
+ * i.e,
+ *	counter 0	- Bit 0
+ *	counter 1	- Bit 1
+ *	...
+ *	Cycle counter	- Bit 31
+ */
+#define DSU_PMU_IDX_CYCLE_COUNTER	31
+
+/* All event counters are 32bit, with a 64bit Cycle counter */
+#define DSU_PMU_COUNTER_WIDTH(idx)	\
+	(((idx) == DSU_PMU_IDX_CYCLE_COUNTER) ? 64 : 32)
+
+#define DSU_PMU_COUNTER_MASK(idx)	\
+	GENMASK_ULL((DSU_PMU_COUNTER_WIDTH((idx)) - 1), 0)
+
+#define DSU_EXT_ATTR(_name, _func, _config)		\
+	(&((struct dev_ext_attribute[]) {				\
+		{							\
+			.attr = __ATTR(_name, 0444, _func, NULL),	\
+			.var = (void *)_config				\
+		}							\
+	})[0].attr.attr)
+
+#define DSU_EVENT_ATTR(_name, _config)		\
+	DSU_EXT_ATTR(_name, dsu_pmu_sysfs_event_show, (unsigned long)_config)
+
+#define DSU_FORMAT_ATTR(_name, _config)		\
+	DSU_EXT_ATTR(_name, dsu_pmu_sysfs_format_show, (char *)_config)
+
+#define DSU_CPUMASK_ATTR(_name, _config)	\
+	DSU_EXT_ATTR(_name, dsu_pmu_cpumask_show, (unsigned long)_config)
+
+struct dsu_hw_events {
+	DECLARE_BITMAP(used_mask, DSU_PMU_MAX_HW_CNTRS);
+	struct perf_event	*events[DSU_PMU_MAX_HW_CNTRS];
+};
+
+/*
+ * struct dsu_pmu	- DSU PMU descriptor
+ *
+ * @pmu_lock		: Protects accesses to DSU PMU register from multiple
+ *			  CPUs.
+ * @hw_events		: Holds the event counter state.
+ * @supported_cpus	: CPUs attached to the DSU.
+ * @active_cpu		: CPU to which the PMU is bound for accesses.
+ * @cpuhp_node		: Node for CPU hotplug notifier link.
+ * @num_counters	: Number of event counters implemented by the PMU,
+ *			  excluding the cycle counter.
+ * @irq			: Interrupt line for counter overflow.
+ * @cpmceid_bitmap	: Bitmap for the availability of architected common
+ *			  events (event_code < 0x40).
+ */
+struct dsu_pmu {
+	struct pmu			pmu;
+	struct device			*dev;
+	raw_spinlock_t			pmu_lock;
+	struct dsu_hw_events		hw_events;
+	cpumask_t			supported_cpus;
+	cpumask_t			active_cpu;
+	struct hlist_node		cpuhp_node;
+	u8				num_counters;
+	int				irq;
+	DECLARE_BITMAP(cpmceid_bitmap, DSU_PMU_MAX_COMMON_EVENTS);
+};
+
+static unsigned long dsu_pmu_cpuhp_state;
+
+static inline struct dsu_pmu *to_dsu_pmu(struct pmu *pmu)
+{
+	return container_of(pmu, struct dsu_pmu, pmu);
+}
+
+static ssize_t dsu_pmu_sysfs_event_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct dev_ext_attribute *eattr = container_of(attr,
+					struct dev_ext_attribute, attr);
+	return snprintf(buf, PAGE_SIZE, "event=0x%lx\n",
+					 (unsigned long)eattr->var);
+}
+
+static ssize_t dsu_pmu_sysfs_format_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	struct dev_ext_attribute *eattr = container_of(attr,
+					struct dev_ext_attribute, attr);
+	return snprintf(buf, PAGE_SIZE, "%s\n", (char *)eattr->var);
+}
+
+static ssize_t dsu_pmu_cpumask_show(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	struct pmu *pmu = dev_get_drvdata(dev);
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(pmu);
+	struct dev_ext_attribute *eattr = container_of(attr,
+					struct dev_ext_attribute, attr);
+	unsigned long mask_id = (unsigned long)eattr->var;
+	const cpumask_t *cpumask;
+
+	switch (mask_id) {
+	case DSU_ACTIVE_CPU_MASK:
+		cpumask = &dsu_pmu->active_cpu;
+		break;
+	case DSU_SUPPORTED_CPU_MASK:
+		cpumask = &dsu_pmu->supported_cpus;
+		break;
+	default:
+		return 0;
+	}
+	return cpumap_print_to_pagebuf(true, buf, cpumask);
+}
+
+static struct attribute *dsu_pmu_format_attrs[] = {
+	DSU_FORMAT_ATTR(event, "config:0-31"),
+	NULL,
+};
+
+static const struct attribute_group dsu_pmu_format_attr_group = {
+	.name = "format",
+	.attrs = dsu_pmu_format_attrs,
+};
+
+static struct attribute *dsu_pmu_event_attrs[] = {
+	DSU_EVENT_ATTR(cycles, 0x11),
+	DSU_EVENT_ATTR(bus_access, 0x19),
+	DSU_EVENT_ATTR(memory_error, 0x1a),
+	DSU_EVENT_ATTR(bus_cycles, 0x1d),
+	DSU_EVENT_ATTR(l3d_cache_allocate, 0x29),
+	DSU_EVENT_ATTR(l3d_cache_refill, 0x2a),
+	DSU_EVENT_ATTR(l3d_cache, 0x2b),
+	DSU_EVENT_ATTR(l3d_cache_wb, 0x2c),
+	DSU_EVENT_ATTR(bus_access_rd, 0x60),
+	DSU_EVENT_ATTR(bus_access_wr, 0x61),
+	DSU_EVENT_ATTR(bus_access_shared, 0x62),
+	DSU_EVENT_ATTR(bus_access_not_shared, 0x63),
+	DSU_EVENT_ATTR(bus_access_normal, 0x64),
+	DSU_EVENT_ATTR(bus_access_periph, 0x65),
+	DSU_EVENT_ATTR(l3d_cache_rd, 0xa0),
+	DSU_EVENT_ATTR(l3d_cache_wr, 0xa1),
+	DSU_EVENT_ATTR(l3d_cache_refill_rd, 0xa2),
+	DSU_EVENT_ATTR(l3d_cache_refill_wr, 0xa3),
+	DSU_EVENT_ATTR(acp_access, 0x119),
+	DSU_EVENT_ATTR(acp_cycles, 0x11d),
+	DSU_EVENT_ATTR(acp_access_rd, 0x160),
+	DSU_EVENT_ATTR(acp_access_wr, 0x161),
+	DSU_EVENT_ATTR(pp_access, 0x219),
+	DSU_EVENT_ATTR(pp_cycles, 0x21d),
+	DSU_EVENT_ATTR(pp_access_rd, 0x260),
+	DSU_EVENT_ATTR(pp_access_wr, 0x261),
+	DSU_EVENT_ATTR(scu_snp_access, 0xc0),
+	DSU_EVENT_ATTR(scu_snp_evict, 0xc1),
+	DSU_EVENT_ATTR(scu_snp_access_cpu, 0xc2),
+	DSU_EVENT_ATTR(scu_pftch_cpu_access, 0x500),
+	DSU_EVENT_ATTR(scu_pftch_cpu_miss, 0x501),
+	DSU_EVENT_ATTR(scu_pftch_cpu_hit, 0x502),
+	DSU_EVENT_ATTR(scu_pftch_cpu_match, 0x503),
+	DSU_EVENT_ATTR(scu_pftch_cpu_kill, 0x504),
+	DSU_EVENT_ATTR(scu_stash_icn_access, 0x510),
+	DSU_EVENT_ATTR(scu_stash_icn_miss, 0x511),
+	DSU_EVENT_ATTR(scu_stash_icn_hit, 0x512),
+	DSU_EVENT_ATTR(scu_stash_icn_match, 0x513),
+	DSU_EVENT_ATTR(scu_stash_icn_kill, 0x514),
+	DSU_EVENT_ATTR(scu_hzd_address, 0xd0),
+	NULL,
+};
+
+static bool dsu_pmu_event_supported(struct dsu_pmu *dsu_pmu, unsigned long evt)
+{
+	/*
+	 * DSU PMU provides a bit map for events with
+	 *   id < DSU_PMU_MAX_COMMON_EVENTS.
+	 * Events above the range are reported as supported, as
+	 * tracking the support needs per-chip tables and makes
+	 * it difficult to track. If an event is not supported,
+	 * it won't be counted.
+	 */
+	if (evt >= DSU_PMU_MAX_COMMON_EVENTS)
+		return true;
+	/* The PMU driver doesn't support chain mode */
+	if (evt == DSU_PMU_EVT_CHAIN)
+		return false;
+	return test_bit(evt, dsu_pmu->cpmceid_bitmap);
+}
+
+static umode_t
+dsu_pmu_event_attr_is_visible(struct kobject *kobj, struct attribute *attr,
+				int unused)
+{
+	struct pmu *pmu = dev_get_drvdata(kobj_to_dev(kobj));
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(pmu);
+	struct dev_ext_attribute *eattr = container_of(attr,
+					struct dev_ext_attribute, attr.attr);
+	unsigned long evt = (unsigned long)eattr->var;
+
+	if (dsu_pmu_event_supported(dsu_pmu, evt))
+		return attr->mode;
+	return 0;
+}
+
+static const struct attribute_group dsu_pmu_events_attr_group = {
+	.name = "events",
+	.attrs = dsu_pmu_event_attrs,
+	.is_visible = dsu_pmu_event_attr_is_visible,
+};
+
+static struct attribute *dsu_pmu_cpumask_attrs[] = {
+	DSU_CPUMASK_ATTR(cpumask, DSU_ACTIVE_CPU_MASK),
+	DSU_CPUMASK_ATTR(supported_cpus, DSU_SUPPORTED_CPU_MASK),
+	NULL,
+};
+
+static const struct attribute_group dsu_pmu_cpumask_attr_group = {
+	.attrs = dsu_pmu_cpumask_attrs,
+};
+
+static const struct attribute_group *dsu_pmu_attr_groups[] = {
+	&dsu_pmu_cpumask_attr_group,
+	&dsu_pmu_events_attr_group,
+	&dsu_pmu_format_attr_group,
+	NULL,
+};
+
+static int dsu_pmu_get_online_cpu(struct dsu_pmu *dsu_pmu)
+{
+	return cpumask_first_and(&dsu_pmu->supported_cpus, cpu_online_mask);
+}
+
+static int dsu_pmu_get_online_cpu_any_but(struct dsu_pmu *dsu_pmu, int cpu)
+{
+	struct cpumask online_supported;
+
+	cpumask_and(&online_supported,
+			 &dsu_pmu->supported_cpus, cpu_online_mask);
+	return cpumask_any_but(&online_supported, cpu);
+}
+
+static inline bool dsu_pmu_counter_valid(struct dsu_pmu *dsu_pmu, u32 idx)
+{
+	return (idx < dsu_pmu->num_counters) ||
+	       (idx == DSU_PMU_IDX_CYCLE_COUNTER);
+}
+
+static inline u64 dsu_pmu_read_counter(struct perf_event *event)
+{
+	u64 val = 0;
+	unsigned long flags;
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
+	int idx = event->hw.idx;
+
+	if (WARN_ON(!cpumask_test_cpu(smp_processor_id(),
+				 &dsu_pmu->supported_cpus)))
+		return 0;
+
+	if (!dsu_pmu_counter_valid(dsu_pmu, idx)) {
+		dev_err(event->pmu->dev,
+			"Trying to read invalid counter %d\n", idx);
+		return 0;
+	}
+
+	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
+	if (idx == DSU_PMU_IDX_CYCLE_COUNTER)
+		val = __dsu_pmu_read_pmccntr();
+	else
+		val = __dsu_pmu_read_counter(idx);
+	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
+
+	return val;
+}
+
+static void dsu_pmu_write_counter(struct perf_event *event, u64 val)
+{
+	unsigned long flags;
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
+	int idx = event->hw.idx;
+
+	if (WARN_ON(!cpumask_test_cpu(smp_processor_id(),
+			 &dsu_pmu->supported_cpus)))
+		return;
+
+	if (!dsu_pmu_counter_valid(dsu_pmu, idx)) {
+		dev_err(event->pmu->dev,
+			"writing to invalid counter %d\n", idx);
+		return;
+	}
+
+	val &= DSU_PMU_COUNTER_MASK(idx);
+	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
+	if (idx == DSU_PMU_IDX_CYCLE_COUNTER)
+		__dsu_pmu_write_pmccntr(val);
+	else
+		__dsu_pmu_write_counter(idx, val);
+	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
+}
+
+static int dsu_pmu_get_event_idx(struct dsu_hw_events *hw_events,
+				 struct perf_event *event)
+{
+	int idx;
+	unsigned long evtype = event->attr.config;
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
+	unsigned long *used_mask = hw_events->used_mask;
+
+	if (evtype == DSU_PMU_EVT_CYCLES) {
+		if (test_and_set_bit(DSU_PMU_IDX_CYCLE_COUNTER, used_mask))
+			return -EAGAIN;
+		return DSU_PMU_IDX_CYCLE_COUNTER;
+	}
+
+	idx = find_next_zero_bit(used_mask, dsu_pmu->num_counters, 0);
+	if (idx >= dsu_pmu->num_counters)
+		return -EAGAIN;
+	set_bit(idx, hw_events->used_mask);
+	return idx;
+}
+
+static void dsu_pmu_enable_counter(struct dsu_pmu *dsu_pmu, int idx)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
+	__dsu_pmu_counter_interrupt_enable(idx);
+	__dsu_pmu_enable_counter(idx);
+	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
+}
+
+static void dsu_pmu_disable_counter(struct dsu_pmu *dsu_pmu, int idx)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
+	__dsu_pmu_disable_counter(idx);
+	__dsu_pmu_counter_interrupt_disable(idx);
+	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
+}
+
+static inline void dsu_pmu_set_event(struct dsu_pmu *dsu_pmu,
+					struct perf_event *event)
+{
+	int idx = event->hw.idx;
+	unsigned long flags;
+
+	if (!dsu_pmu_counter_valid(dsu_pmu, idx)) {
+		dev_err(event->pmu->dev,
+			"Trying to set invalid counter %d\n", idx);
+		return;
+	}
+
+	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
+	__dsu_pmu_set_event(idx, event->hw.config_base);
+	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
+}
+
+static void dsu_pmu_event_update(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 delta, prev_count, new_count;
+
+	do {
+		/* We may also be called from the irq handler */
+		prev_count = local64_read(&hwc->prev_count);
+		new_count = dsu_pmu_read_counter(event);
+	} while (local64_cmpxchg(&hwc->prev_count, prev_count, new_count) !=
+			prev_count);
+	delta = (new_count - prev_count) & DSU_PMU_COUNTER_MASK(hwc->idx);
+	local64_add(delta, &event->count);
+}
+
+static void dsu_pmu_read(struct perf_event *event)
+{
+	dsu_pmu_event_update(event);
+}
+
+static inline u32 dsu_pmu_get_status(void)
+{
+	return __dsu_pmu_get_pmovsclr();
+}
+
+/**
+ * dsu_pmu_set_event_period: Set the period for the counter.
+ *
+ * All DSU PMU event counters, except the cycle counter are 32bit
+ * counters. To handle cases of extreme interrupt latency, we program
+ * the counter with half of the max count for the counters.
+ */
+static void dsu_pmu_set_event_period(struct perf_event *event)
+{
+	int idx = event->hw.idx;
+	u64 val = DSU_PMU_COUNTER_MASK(idx) >> 1;
+
+	local64_set(&event->hw.prev_count, val);
+	dsu_pmu_write_counter(event, val);
+}
+
+static irqreturn_t dsu_pmu_handle_irq(int irq_num, void *dev)
+{
+	int i;
+	bool handled = false;
+	struct dsu_pmu *dsu_pmu = dev;
+	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
+	unsigned long overflow, workset;
+
+	overflow = (unsigned long)dsu_pmu_get_status();
+	bitmap_and(&workset, &overflow, hw_events->used_mask,
+		   DSU_PMU_MAX_HW_CNTRS);
+
+	if (!workset)
+		return IRQ_NONE;
+
+	for_each_set_bit(i, &workset, DSU_PMU_MAX_HW_CNTRS) {
+		struct perf_event *event = hw_events->events[i];
+
+		if (WARN_ON(!event))
+			continue;
+		dsu_pmu_event_update(event);
+		dsu_pmu_set_event_period(event);
+
+		handled = true;
+	}
+
+	return IRQ_RETVAL(handled);
+}
+
+static void dsu_pmu_start(struct perf_event *event, int pmu_flags)
+{
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
+
+	/* We always reprogram the counter */
+	if (pmu_flags & PERF_EF_RELOAD)
+		WARN_ON(!(event->hw.state & PERF_HES_UPTODATE));
+	dsu_pmu_set_event_period(event);
+	if (event->hw.idx != DSU_PMU_IDX_CYCLE_COUNTER)
+		dsu_pmu_set_event(dsu_pmu, event);
+	event->hw.state = 0;
+	dsu_pmu_enable_counter(dsu_pmu, event->hw.idx);
+}
+
+static void dsu_pmu_stop(struct perf_event *event, int pmu_flags)
+{
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
+
+	if (event->hw.state & PERF_HES_STOPPED)
+		return;
+	dsu_pmu_disable_counter(dsu_pmu, event->hw.idx);
+	dsu_pmu_event_update(event);
+	event->hw.state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
+}
+
+static int dsu_pmu_add(struct perf_event *event, int flags)
+{
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
+	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
+	struct hw_perf_event *hwc = &event->hw;
+	int idx;
+
+	if (!cpumask_test_cpu(smp_processor_id(), &dsu_pmu->supported_cpus))
+		return -ENOENT;
+
+	idx = dsu_pmu_get_event_idx(hw_events, event);
+	if (idx < 0)
+		return idx;
+
+	hwc->idx = idx;
+	hw_events->events[idx] = event;
+	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+
+	if (flags & PERF_EF_START)
+		dsu_pmu_start(event, PERF_EF_RELOAD);
+
+	perf_event_update_userpage(event);
+	return 0;
+}
+
+static void dsu_pmu_del(struct perf_event *event, int flags)
+{
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
+	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
+	struct hw_perf_event *hwc = &event->hw;
+	int idx = hwc->idx;
+
+	dsu_pmu_stop(event, PERF_EF_UPDATE);
+	hw_events->events[idx] = NULL;
+	clear_bit(idx, hw_events->used_mask);
+	perf_event_update_userpage(event);
+}
+
+static void dsu_pmu_enable(struct pmu *pmu)
+{
+	u32 pmcr;
+	unsigned long flags;
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(pmu);
+	int enabled = bitmap_weight(dsu_pmu->hw_events.used_mask,
+				    DSU_PMU_MAX_HW_CNTRS);
+
+	if (!enabled)
+		return;
+
+	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
+	pmcr = __dsu_pmu_read_pmcr();
+	pmcr |= CLUSTERPMCR_E;
+	__dsu_pmu_write_pmcr(pmcr);
+	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
+}
+
+static void dsu_pmu_disable(struct pmu *pmu)
+{
+	u32 pmcr;
+	unsigned long flags;
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(pmu);
+
+	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
+	pmcr = __dsu_pmu_read_pmcr();
+	pmcr &= ~CLUSTERPMCR_E;
+	__dsu_pmu_write_pmcr(pmcr);
+	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
+}
+
+static int dsu_pmu_validate_event(struct pmu *pmu,
+				  struct dsu_hw_events *hw_events,
+				  struct perf_event *event)
+{
+	if (is_software_event(event))
+		return 1;
+	/* Reject groups spanning multiple HW PMUs. */
+	if (event->pmu != pmu)
+		return 0;
+	if (event->state < PERF_EVENT_STATE_OFF)
+		return 1;
+	if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
+		return 1;
+	return dsu_pmu_get_event_idx(hw_events, event) >= 0;
+}
+
+/*
+ * Make sure the group of events can be scheduled at once
+ * on the PMU.
+ */
+static int dsu_pmu_validate_group(struct perf_event *event)
+{
+	struct perf_event *sibling, *leader = event->group_leader;
+	struct dsu_hw_events fake_hw;
+
+	if (event->group_leader == event)
+		return 0;
+
+	memset(fake_hw.used_mask, 0, sizeof(fake_hw.used_mask));
+	if (!dsu_pmu_validate_event(event->pmu, &fake_hw, leader))
+		return -EINVAL;
+	list_for_each_entry(sibling, &leader->sibling_list, group_entry) {
+		if (!dsu_pmu_validate_event(event->pmu, &fake_hw, sibling))
+			return -EINVAL;
+	}
+	if (!dsu_pmu_validate_event(event->pmu, &fake_hw, event))
+		return -EINVAL;
+	return 0;
+}
+
+static int dsu_pmu_event_init(struct perf_event *event)
+{
+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
+
+	if (event->attr.type != event->pmu->type)
+		return -ENOENT;
+
+	if (!dsu_pmu_event_supported(dsu_pmu, event->attr.config))
+		return -EOPNOTSUPP;
+
+	/* We cannot support task bound events */
+	if (event->cpu < 0) {
+		dev_dbg(dsu_pmu->pmu.dev, "Can't support per-task counters\n");
+		return -EINVAL;
+	}
+
+	/* We don't support sampling */
+	if (is_sampling_event(event)) {
+		dev_dbg(dsu_pmu->pmu.dev, "Can't support sampling events\n");
+		return -EOPNOTSUPP;
+	}
+
+	if (has_branch_stack(event) ||
+	    event->attr.exclude_user ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv ||
+	    event->attr.exclude_idle ||
+	    event->attr.exclude_host ||
+	    event->attr.exclude_guest) {
+		dev_dbg(dsu_pmu->pmu.dev, "Can't support filtering\n");
+		return -EINVAL;
+	}
+
+	if (dsu_pmu_validate_group(event))
+		return -EINVAL;
+
+	event->cpu = cpumask_first(&dsu_pmu->active_cpu);
+	if (event->cpu >= nr_cpu_ids)
+		return -EINVAL;
+
+	event->hw.config_base = event->attr.config;
+	return 0;
+}
+
+static struct dsu_pmu *dsu_pmu_alloc(struct platform_device *pdev)
+{
+	struct dsu_pmu *dsu_pmu;
+
+	dsu_pmu = devm_kzalloc(&pdev->dev, sizeof(*dsu_pmu), GFP_KERNEL);
+	if (!dsu_pmu)
+		return ERR_PTR(-ENOMEM);
+
+	raw_spin_lock_init(&dsu_pmu->pmu_lock);
+	return dsu_pmu;
+}
+
+/**
+ * dsu_pmu_dt_get_cpus: Get the list of CPUs in the cluster.
+ */
+static int dsu_pmu_dt_get_cpus(struct device_node *dev, cpumask_t *mask)
+{
+	int i = 0, n, cpu;
+	struct device_node *cpu_node;
+
+	n = of_count_phandle_with_args(dev, "cpus", NULL);
+	if (n <= 0)
+		return -ENODEV;
+	for (; i < n; i++) {
+		cpu_node = of_parse_phandle(dev, "cpus", i);
+		if (!cpu_node)
+			break;
+		cpu = of_cpu_node_to_id(cpu_node);
+		of_node_put(cpu_node);
+		if (cpu < 0)
+			break;
+		cpumask_set_cpu(cpu, mask);
+	}
+	if (i != n)
+		return -EINVAL;
+	return 0;
+}
+
+/*
+ * dsu_pmu_probe_pmu: Probe the PMU details on a CPU in the cluster.
+ */
+static void dsu_pmu_probe_pmu(void *data)
+{
+	struct dsu_pmu *dsu_pmu = data;
+	u64 cpmcr;
+	u32 cpmceid[2];
+
+	if (WARN_ON(!cpumask_test_cpu(smp_processor_id(),
+		&dsu_pmu->supported_cpus)))
+		return;
+	cpmcr = __dsu_pmu_read_pmcr();
+	dsu_pmu->num_counters = ((cpmcr >> CLUSTERPMCR_N_SHIFT) &
+					CLUSTERPMCR_N_MASK);
+	if (!dsu_pmu->num_counters)
+		return;
+	cpmceid[0] = __dsu_pmu_read_pmceid(0);
+	cpmceid[1] = __dsu_pmu_read_pmceid(1);
+	bitmap_from_u32array(dsu_pmu->cpmceid_bitmap,
+				DSU_PMU_MAX_COMMON_EVENTS,
+				cpmceid,
+				ARRAY_SIZE(cpmceid));
+}
+
+static int dsu_pmu_device_probe(struct platform_device *pdev)
+{
+	int irq, rc, cpu;
+	struct dsu_pmu *dsu_pmu;
+	char *name;
+	static atomic_t pmu_idx = ATOMIC_INIT(-1);
+
+	dsu_pmu = dsu_pmu_alloc(pdev);
+	if (IS_ERR(dsu_pmu))
+		return PTR_ERR(dsu_pmu);
+
+	rc = dsu_pmu_dt_get_cpus(pdev->dev.of_node, &dsu_pmu->supported_cpus);
+	if (rc) {
+		dev_warn(&pdev->dev, "Failed to parse the CPUs\n");
+		return rc;
+	}
+
+	rc = smp_call_function_any(&dsu_pmu->supported_cpus,
+					dsu_pmu_probe_pmu,
+					dsu_pmu, 1);
+	if (rc)
+		return rc;
+	if (!dsu_pmu->num_counters)
+		return -ENODEV;
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_warn(&pdev->dev, "Failed to find IRQ\n");
+		return -EINVAL;
+	}
+
+	name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s_%d",
+				PMUNAME, atomic_inc_return(&pmu_idx));
+	rc = devm_request_irq(&pdev->dev, irq, dsu_pmu_handle_irq,
+					0, name, dsu_pmu);
+	if (rc) {
+		dev_warn(&pdev->dev, "Failed to request IRQ %d\n", irq);
+		return rc;
+	}
+
+	/*
+	 * Find one CPU in the DSU to handle the IRQs.
+	 * It is highly unlikely that we would fail
+	 * to find one, given the probing has succeeded.
+	 */
+	cpu = dsu_pmu_get_online_cpu(dsu_pmu);
+	if (cpu >= nr_cpu_ids)
+		return -ENODEV;
+	cpumask_set_cpu(cpu, &dsu_pmu->active_cpu);
+	rc = irq_set_affinity_hint(irq, &dsu_pmu->active_cpu);
+	if (rc) {
+		dev_warn(&pdev->dev, "Failed to force IRQ affinity for %d\n",
+					 irq);
+		return rc;
+	}
+
+	platform_set_drvdata(pdev, dsu_pmu);
+	rc = cpuhp_state_add_instance(dsu_pmu_cpuhp_state,
+						&dsu_pmu->cpuhp_node);
+	if (rc)
+		goto irq_cleanup;
+
+	dsu_pmu->irq = irq;
+	dsu_pmu->pmu = (struct pmu) {
+		.task_ctx_nr	= perf_invalid_context,
+
+		.pmu_enable	= dsu_pmu_enable,
+		.pmu_disable	= dsu_pmu_disable,
+		.event_init	= dsu_pmu_event_init,
+		.add		= dsu_pmu_add,
+		.del		= dsu_pmu_del,
+		.start		= dsu_pmu_start,
+		.stop		= dsu_pmu_stop,
+		.read		= dsu_pmu_read,
+
+		.attr_groups	= dsu_pmu_attr_groups,
+	};
+
+	rc = perf_pmu_register(&dsu_pmu->pmu, name, -1);
+	if (rc)
+		goto cpuhp_cleanup;
+
+	dev_info(&pdev->dev, "Registered %s with %d event counters",
+				name, dsu_pmu->num_counters);
+	return 0;
+
+cpuhp_cleanup:
+	cpuhp_state_remove_instance(dsu_pmu_cpuhp_state, &dsu_pmu->cpuhp_node);
+irq_cleanup:
+	irq_set_affinity_hint(dsu_pmu->irq, NULL);
+	return rc;
+}
+
+static int dsu_pmu_device_remove(struct platform_device *pdev)
+{
+	struct dsu_pmu *dsu_pmu = platform_get_drvdata(pdev);
+
+	perf_pmu_unregister(&dsu_pmu->pmu);
+	cpuhp_state_remove_instance(dsu_pmu_cpuhp_state, &dsu_pmu->cpuhp_node);
+	irq_set_affinity_hint(dsu_pmu->irq, NULL);
+
+	return 0;
+}
+
+static const struct of_device_id dsu_pmu_of_match[] = {
+	{ .compatible = "arm,dsu-pmu", },
+	{},
+};
+
+static struct platform_driver dsu_pmu_driver = {
+	.driver = {
+		.name	= DRVNAME,
+		.of_match_table = of_match_ptr(dsu_pmu_of_match),
+	},
+	.probe = dsu_pmu_device_probe,
+	.remove = dsu_pmu_device_remove,
+};
+
+static int dsu_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
+{
+	int dst;
+	struct dsu_pmu *dsu_pmu = hlist_entry_safe(node, struct dsu_pmu,
+						   cpuhp_node);
+
+	if (!cpumask_test_and_clear_cpu(cpu, &dsu_pmu->active_cpu))
+		return 0;
+
+	dst = dsu_pmu_get_online_cpu_any_but(dsu_pmu, cpu);
+	if (dst < nr_cpu_ids) {
+		cpumask_set_cpu(dst, &dsu_pmu->active_cpu);
+		perf_pmu_migrate_context(&dsu_pmu->pmu, cpu, dst);
+		irq_set_affinity_hint(dsu_pmu->irq, &dsu_pmu->active_cpu);
+	}
+
+	return 0;
+}
+
+static int __init dsu_pmu_init(void)
+{
+	int ret;
+
+	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
+					DRVNAME,
+					NULL,
+					dsu_pmu_cpu_teardown);
+	if (ret < 0)
+		return ret;
+	dsu_pmu_cpuhp_state = ret;
+	return platform_driver_register(&dsu_pmu_driver);
+}
+
+static void __exit dsu_pmu_exit(void)
+{
+	platform_driver_unregister(&dsu_pmu_driver);
+	cpuhp_remove_multi_state(dsu_pmu_cpuhp_state);
+}
+
+module_init(dsu_pmu_init);
+module_exit(dsu_pmu_exit);
+
+MODULE_DEVICE_TABLE(of, dsu_pmu_of_match);
+MODULE_DESCRIPTION("Perf driver for ARM DynamIQ Shared Unit");
+MODULE_AUTHOR("Suzuki K Poulose <suzuki.poulose@arm.com>");
+MODULE_LICENSE("GPL v2");
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support
  2017-08-08 11:37 ` [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support Suzuki K Poulose
@ 2017-08-16 14:10   ` Mark Rutland
  2017-08-17 14:52     ` Suzuki K Poulose
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2017-08-16 14:10 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: linux-arm-kernel, will.deacon, marc.zyngier, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz

On Tue, Aug 08, 2017 at 12:37:26PM +0100, Suzuki K Poulose wrote:
> +/*
> + * struct dsu_pmu	- DSU PMU descriptor
> + *
> + * @pmu_lock		: Protects accesses to DSU PMU register from multiple
> + *			  CPUs.
> + * @hw_events		: Holds the event counter state.
> + * @supported_cpus	: CPUs attached to the DSU.
> + * @active_cpu		: CPU to which the PMU is bound for accesses.
> + * @cpuhp_node		: Node for CPU hotplug notifier link.
> + * @num_counters	: Number of event counters implemented by the PMU,
> + *			  excluding the cycle counter.
> + * @irq			: Interrupt line for counter overflow.
> + * @cpmceid_bitmap	: Bitmap for the availability of architected common
> + *			  events (event_code < 0x40).
> + */
> +struct dsu_pmu {
> +	struct pmu			pmu;
> +	struct device			*dev;
> +	raw_spinlock_t			pmu_lock;

I'm in two minds about pmu_lock. The core has to take the ctx lock for
most (all?) operations, and we only allow events to be opened on a
particular CPU, so there shouldn't be concurrent accesses from other
CPUs.

We do need to disable interrupts in order to serialise against the IRQ
handler, and disabling IRQs doesn't make it clear that we're protecting
a particular resource.

Could we update the description to make it clear that the pmu_lock is
used to serialise against the IRQ handler? It also happens to protect
cross-cpu accesses, but those would be a bug.

Other PMU drivers have locks which may not be necessary; I can try to
clean those up later.

[...]

> +static struct attribute *dsu_pmu_event_attrs[] = {
> +	DSU_EVENT_ATTR(cycles, 0x11),
> +	DSU_EVENT_ATTR(bus_access, 0x19),
> +	DSU_EVENT_ATTR(memory_error, 0x1a),
> +	DSU_EVENT_ATTR(bus_cycles, 0x1d),
> +	DSU_EVENT_ATTR(l3d_cache_allocate, 0x29),
> +	DSU_EVENT_ATTR(l3d_cache_refill, 0x2a),
> +	DSU_EVENT_ATTR(l3d_cache, 0x2b),
> +	DSU_EVENT_ATTR(l3d_cache_wb, 0x2c),

MAX_COMMON_EVENTS seems to be 0x40, so are we just assuming the below
are implemented?

If so, why bother exposing them at all? We can't know that they're going
to work...

> +	DSU_EVENT_ATTR(bus_access_rd, 0x60),
> +	DSU_EVENT_ATTR(bus_access_wr, 0x61),
> +	DSU_EVENT_ATTR(bus_access_shared, 0x62),
> +	DSU_EVENT_ATTR(bus_access_not_shared, 0x63),
> +	DSU_EVENT_ATTR(bus_access_normal, 0x64),
> +	DSU_EVENT_ATTR(bus_access_periph, 0x65),
> +	DSU_EVENT_ATTR(l3d_cache_rd, 0xa0),
> +	DSU_EVENT_ATTR(l3d_cache_wr, 0xa1),
> +	DSU_EVENT_ATTR(l3d_cache_refill_rd, 0xa2),
> +	DSU_EVENT_ATTR(l3d_cache_refill_wr, 0xa3),
> +	DSU_EVENT_ATTR(acp_access, 0x119),
> +	DSU_EVENT_ATTR(acp_cycles, 0x11d),
> +	DSU_EVENT_ATTR(acp_access_rd, 0x160),
> +	DSU_EVENT_ATTR(acp_access_wr, 0x161),
> +	DSU_EVENT_ATTR(pp_access, 0x219),
> +	DSU_EVENT_ATTR(pp_cycles, 0x21d),
> +	DSU_EVENT_ATTR(pp_access_rd, 0x260),
> +	DSU_EVENT_ATTR(pp_access_wr, 0x261),
> +	DSU_EVENT_ATTR(scu_snp_access, 0xc0),
> +	DSU_EVENT_ATTR(scu_snp_evict, 0xc1),
> +	DSU_EVENT_ATTR(scu_snp_access_cpu, 0xc2),
> +	DSU_EVENT_ATTR(scu_pftch_cpu_access, 0x500),
> +	DSU_EVENT_ATTR(scu_pftch_cpu_miss, 0x501),
> +	DSU_EVENT_ATTR(scu_pftch_cpu_hit, 0x502),
> +	DSU_EVENT_ATTR(scu_pftch_cpu_match, 0x503),
> +	DSU_EVENT_ATTR(scu_pftch_cpu_kill, 0x504),
> +	DSU_EVENT_ATTR(scu_stash_icn_access, 0x510),
> +	DSU_EVENT_ATTR(scu_stash_icn_miss, 0x511),
> +	DSU_EVENT_ATTR(scu_stash_icn_hit, 0x512),
> +	DSU_EVENT_ATTR(scu_stash_icn_match, 0x513),
> +	DSU_EVENT_ATTR(scu_stash_icn_kill, 0x514),
> +	DSU_EVENT_ATTR(scu_hzd_address, 0xd0),
> +	NULL,
> +};
> +
> +static bool dsu_pmu_event_supported(struct dsu_pmu *dsu_pmu, unsigned long evt)
> +{
> +	/*
> +	 * DSU PMU provides a bit map for events with
> +	 *   id < DSU_PMU_MAX_COMMON_EVENTS.
> +	 * Events above the range are reported as supported, as
> +	 * tracking the support needs per-chip tables and makes
> +	 * it difficult to track. If an event is not supported,
> +	 * it won't be counted.
> +	 */
> +	if (evt >= DSU_PMU_MAX_COMMON_EVENTS)
> +		return true;
> +	/* The PMU driver doesn't support chain mode */
> +	if (evt == DSU_PMU_EVT_CHAIN)
> +		return false;
> +	return test_bit(evt, dsu_pmu->cpmceid_bitmap);
> +}

I don't think we need to use this for event_init() (more on that below),
so I think this can be simplified.

[...]

> +static struct attribute *dsu_pmu_cpumask_attrs[] = {
> +	DSU_CPUMASK_ATTR(cpumask, DSU_ACTIVE_CPU_MASK),
> +	DSU_CPUMASK_ATTR(supported_cpus, DSU_SUPPORTED_CPU_MASK),
> +	NULL,
> +};

Normally we only expose one mask.

Why do we need the supported cpu mask? What is the intended use-case?

[...]

> +static inline u64 dsu_pmu_read_counter(struct perf_event *event)
> +{
> +	u64 val = 0;
> +	unsigned long flags;
> +	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
> +	int idx = event->hw.idx;
> +
> +	if (WARN_ON(!cpumask_test_cpu(smp_processor_id(),
> +				 &dsu_pmu->supported_cpus)))
> +		return 0;

It would be better to check that this is the active CPU, since reading
on another supported CPU would still be wrong.

Likewise for the check in dsu_pmu_write_counter().
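
i.e. (untested):

	if (WARN_ON(!cpumask_test_cpu(smp_processor_id(),
				      &dsu_pmu->active_cpu)))
		return 0;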

[...]

> +static void dsu_pmu_enable_counter(struct dsu_pmu *dsu_pmu, int idx)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
> +	__dsu_pmu_counter_interrupt_enable(idx);
> +	__dsu_pmu_enable_counter(idx);
> +	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
> +}
> +
> +static void dsu_pmu_disable_counter(struct dsu_pmu *dsu_pmu, int idx)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
> +	__dsu_pmu_disable_counter(idx);
> +	__dsu_pmu_counter_interrupt_disable(idx);
> +	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
> +}

Do we need the disable/enable IRQs across these? AFAICT these writing to
set/clr registers, so there's no RMW race to worry about so long as the
rest of the code does likewise.

[...]

> +static inline u32 dsu_pmu_get_status(void)
> +{
> +	return __dsu_pmu_get_pmovsclr();
> +}

This could do with a better name. It's not clear what status it's meant
to get.

[...]

> +static irqreturn_t dsu_pmu_handle_irq(int irq_num, void *dev)
> +{
> +	int i;
> +	bool handled = false;
> +	struct dsu_pmu *dsu_pmu = dev;
> +	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
> +	unsigned long overflow, workset;
> +
> +	overflow = (unsigned long)dsu_pmu_get_status();

This cast isn't necessary.

> +	bitmap_and(&workset, &overflow, hw_events->used_mask,
> +		   DSU_PMU_MAX_HW_CNTRS);
> +
> +	if (!workset)
> +		return IRQ_NONE;
> +
> +	for_each_set_bit(i, &workset, DSU_PMU_MAX_HW_CNTRS) {
> +		struct perf_event *event = hw_events->events[i];
> +
> +		if (WARN_ON(!event))
> +			continue;

Unless we explicitly clear an event's overflow flag when we remove it,
this could happen in normal operation. This should probably have a
_ONCE(), if we think we need it.

[...]

> +static int dsu_pmu_add(struct perf_event *event, int flags)
> +{
> +	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
> +	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
> +	struct hw_perf_event *hwc = &event->hw;
> +	int idx;
> +
> +	if (!cpumask_test_cpu(smp_processor_id(), &dsu_pmu->supported_cpus))
> +		return -ENOENT;

This shouldn't ever happen, and we can check against the active cpumask,
with a WARN_ON_ONCE(). We have to do this for CPU PMUs since they
support events which can migrate with tasks, but that's not the case
here.

[...]

> +static int dsu_pmu_validate_event(struct pmu *pmu,
> +				  struct dsu_hw_events *hw_events,
> +				  struct perf_event *event)
> +{
> +	if (is_software_event(event))
> +		return 1;
> +	/* Reject groups spanning multiple HW PMUs. */
> +	if (event->pmu != pmu)
> +		return 0;
> +	if (event->state < PERF_EVENT_STATE_OFF)
> +		return 1;
> +	if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
> +		return 1;

This doesn't make sense for a !CPU pmu, and I think you can nuke the
prior test, too, which I think doesn't make sense.

> +	return dsu_pmu_get_event_idx(hw_events, event) >= 0;
> +}

This whole function could be bool.

> +/*
> + * Make sure the group of events can be scheduled at once
> + * on the PMU.
> + */
> +static int dsu_pmu_validate_group(struct perf_event *event)
> +{
> +	struct perf_event *sibling, *leader = event->group_leader;
> +	struct dsu_hw_events fake_hw;
> +
> +	if (event->group_leader == event)
> +		return 0;
> +
> +	memset(fake_hw.used_mask, 0, sizeof(fake_hw.used_mask));
> +	if (!dsu_pmu_validate_event(event->pmu, &fake_hw, leader))
> +		return -EINVAL;
> +	list_for_each_entry(sibling, &leader->sibling_list, group_entry) {
> +		if (!dsu_pmu_validate_event(event->pmu, &fake_hw, sibling))
> +			return -EINVAL;
> +	}
> +	if (!dsu_pmu_validate_event(event->pmu, &fake_hw, event))
> +		return -EINVAL;
> +	return 0;
> +}

The return value has the opposite polarity to dsu_pmu_validate_event(),
which is rather confusing.

[...]

> +static int dsu_pmu_event_init(struct perf_event *event)
> +{
> +	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
> +
> +	if (event->attr.type != event->pmu->type)
> +		return -ENOENT;
> +
> +	if (!dsu_pmu_event_supported(dsu_pmu, event->attr.config))
> +		return -EOPNOTSUPP;

I can see why we track this for the sysfs nodes, but why do we bother
enforcing it for event_init()? Especially given we only enforce it for a
subset of values...

We don't do that for other PMUs, and assuming that this doesn't break
the HW, I'd rather we allowed users to request raw values, even if we
think the HW won't count them.

> +
> +	/* We cannot support task bound events */
> +	if (event->cpu < 0) {
> +		dev_dbg(dsu_pmu->pmu.dev, "Can't support per-task counters\n");
> +		return -EINVAL;
> +	}

We should also check (event->attach_state & PERF_ATTACH_TASK), since you
could have a per-task event with a CPU filter, and that doesn't make
sense either.
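
e.g. (an untested sketch on top of your existing check):

	/* We cannot support task-bound events, even with a CPU filter. */
	if (event->cpu < 0 || (event->attach_state & PERF_ATTACH_TASK)) {
		dev_dbg(dsu_pmu->pmu.dev, "Can't support per-task counters\n");
		return -EINVAL;
	}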

[...]

> +/**
> + * dsu_pmu_dt_get_cpus: Get the list of CPUs in the cluster.
> + */
> +static int dsu_pmu_dt_get_cpus(struct device_node *dev, cpumask_t *mask)
> +{
> +	int i = 0, n, cpu;
> +	struct device_node *cpu_node;
> +
> +	n = of_count_phandle_with_args(dev, "cpus", NULL);
> +	if (n <= 0)
> +		return -ENODEV;
> +	for (; i < n; i++) {
> +		cpu_node = of_parse_phandle(dev, "cpus", i);
> +		if (!cpu_node)
> +			break;
> +		cpu = of_cpu_node_to_id(cpu_node);
> +		of_node_put(cpu_node);
> +		if (cpu < 0)
> +			break;

I believe this can happen if the kernel's nr_cpus were capped below the
number of CPUs in the system. So we need to iterate all entries of the
cpus list regardless.
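
i.e. something like this for the loop (an untested sketch, keeping your
variable names):

	for (i = 0; i < n; i++) {
		cpu_node = of_parse_phandle(dev, "cpus", i);
		if (!cpu_node)
			break;
		cpu = of_cpu_node_to_id(cpu_node);
		of_node_put(cpu_node);
		/*
		 * We can't fail here: the DT may describe CPUs which the
		 * kernel never brought up (e.g. nr_cpus was capped).
		 */
		if (cpu < 0)
			continue;
		cpumask_set_cpu(cpu, mask);
	}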

> +		cpumask_set_cpu(cpu, mask);
> +	}
> +	if (i != n)
> +		return -EINVAL;
> +	return 0;
> +}

[...]

> +
> +/*
> + * dsu_pmu_probe_pmu: Probe the PMU details on a CPU in the cluster.
> + */
> +static void dsu_pmu_probe_pmu(void *data)
> +{
> +	struct dsu_pmu *dsu_pmu = data;
> +	u64 cpmcr;
> +	u32 cpmceid[2];
> +
> +	if (WARN_ON(!cpumask_test_cpu(smp_processor_id(),
> +		&dsu_pmu->supported_cpus)))
> +		return;

I can't see how this can happen unless core kernel code is critically
broken, given that the only caller uses smp_call_function_any() with the
supported_cpus mask.

Do we need this check?

> +	cpmcr = __dsu_pmu_read_pmcr();
> +	dsu_pmu->num_counters = ((cpmcr >> CLUSTERPMCR_N_SHIFT) &
> +					CLUSTERPMCR_N_MASK);
> +	if (!dsu_pmu->num_counters)
> +		return;

Is that valid? What range of values makes sense?

[...]

> +	/*
> +	 * Find one CPU in the DSU to handle the IRQs.
> +	 * It is highly unlikely that we would fail
> +	 * to find one, given the probing has succeeded.
> +	 */
> +	cpu = dsu_pmu_get_online_cpu(dsu_pmu);
> +	if (cpu >= nr_cpu_ids)
> +		return -ENODEV;
> +	cpumask_set_cpu(cpu, &dsu_pmu->active_cpu);
> +	rc = irq_set_affinity_hint(irq, &dsu_pmu->active_cpu);

Is setting the affinity hint strong enough?

Can the affinity be changed behind the back of this driver?

> +static int dsu_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
> +{
> +	int dst;
> +	struct dsu_pmu *dsu_pmu = hlist_entry_safe(node, struct dsu_pmu,
> +						   cpuhp_node);
> +
> +	if (!cpumask_test_and_clear_cpu(cpu, &dsu_pmu->active_cpu))
> +		return 0;
> +
> +	dst = dsu_pmu_get_online_cpu_any_but(dsu_pmu, cpu);
> +	if (dst < nr_cpu_ids) {
> +		cpumask_set_cpu(dst, &dsu_pmu->active_cpu);
> +		perf_pmu_migrate_context(&dsu_pmu->pmu, cpu, dst);
> +		irq_set_affinity_hint(dsu_pmu->irq, &dsu_pmu->active_cpu);

Likewise, is this sufficient?

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support
  2017-08-16 14:10   ` Mark Rutland
@ 2017-08-17 14:52     ` Suzuki K Poulose
  2017-08-17 15:57       ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-17 14:52 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, will.deacon, marc.zyngier, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz

Hi Mark,

On 16/08/17 15:10, Mark Rutland wrote:
> On Tue, Aug 08, 2017 at 12:37:26PM +0100, Suzuki K Poulose wrote:
>> +/*
>> + * struct dsu_pmu	- DSU PMU descriptor
>> + *
>> + * @pmu_lock		: Protects accesses to DSU PMU register from multiple
>> + *			  CPUs.
>> + * @hw_events		: Holds the event counter state.
>> + * @supported_cpus	: CPUs attached to the DSU.
>> + * @active_cpu		: CPU to which the PMU is bound for accesses.
>> + * @cpuhp_node		: Node for CPU hotplug notifier link.
>> + * @num_counters	: Number of event counters implemented by the PMU,
>> + *			  excluding the cycle counter.
>> + * @irq			: Interrupt line for counter overflow.
>> + * @cpmceid_bitmap	: Bitmap for the availability of architected common
>> + *			  events (event_code < 0x40).
>> + */
>> +struct dsu_pmu {
>> +	struct pmu			pmu;
>> +	struct device			*dev;
>> +	raw_spinlock_t			pmu_lock;
>
> I'm in two minds about pmu_lock. The core has to take the ctx lock for
> most (all?) operations, and we only allow events to be opened on a
> particular CPU, so there shouldn't be concurrent accesses from other
> CPUs.
>
> We do need to disable interrupts in order to serialise against the IRQ
> handler, and disabling IRQs doesn't make it clear that we're protecting
> a particular resource.
>
> Could we update the description to make it clear that the pmu_lock is
> used to serialise against the IRQ handler? It also happens to protect
> cross-cpu accesses, but those would be a bug.

Sure, I can.
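
Something like this, perhaps (just a draft of the wording):

 * @pmu_lock		: Serialises accesses to the PMU register state
 *			  against the counter overflow IRQ handler on the
 *			  active CPU. Cross-CPU accesses are not expected
 *			  and would be a bug.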

>
> Other PMU drivers have locks which may not be necessary; I can try to
> clean those up later.
>
> [...]
>
>> +static struct attribute *dsu_pmu_event_attrs[] = {
>> +	DSU_EVENT_ATTR(cycles, 0x11),
>> +	DSU_EVENT_ATTR(bus_acecss, 0x19),
>> +	DSU_EVENT_ATTR(memory_error, 0x1a),
>> +	DSU_EVENT_ATTR(bus_cycles, 0x1d),
>> +	DSU_EVENT_ATTR(l3d_cache_allocate, 0x29),
>> +	DSU_EVENT_ATTR(l3d_cache_refill, 0x2a),
>> +	DSU_EVENT_ATTR(l3d_cache, 0x2b),
>> +	DSU_EVENT_ATTR(l3d_cache_wb, 0x2c),
>
> MAX_COMMON_EVENTS seems to be 0x40, so are we just assuming the below
> are implemented?
>
> If so, why bother exposing them at all? We can't know that they're going
> to work...
>

That's right. The only defending argument is that the event codes are specific
to the DynamIQ, unlike the COMMON_EVENTS which matches the ARMv8 PMU event codes.
So, someone would need to carefully find the event code for a particular event.
Having these entries would make it easier for the user to do the profiling.

Also, future revisions of the DSU could potentially expose more events. So there
wouldn't be any way to tell the user (provided there aren't any changes to the
programming model and we decide to reuse the same compatible string) what we *could*
potentially support. In short, this is not a problem at the moment and we could
do something about it as and when required.

>> +	DSU_EVENT_ATTR(bus_access_rd, 0x60),
>> +	DSU_EVENT_ATTR(bus_access_wr, 0x61),
>> +	DSU_EVENT_ATTR(bus_access_shared, 0x62),
>> +	DSU_EVENT_ATTR(bus_access_not_shared, 0x63),
>> +	DSU_EVENT_ATTR(bus_access_normal, 0x64),

...

>> +static bool dsu_pmu_event_supported(struct dsu_pmu *dsu_pmu, unsigned long evt)
>> +{
>> +	/*
>> +	 * DSU PMU provides a bit map for events with
>> +	 *   id < DSU_PMU_MAX_COMMON_EVENTS.
>> +	 * Events above the range are reported as supported, as
>> +	 * tracking the support needs per-chip tables and makes
>> +	 * it difficult to track. If an event is not supported,
>> +	 * it won't be counted.
>> +	 */
>> +	if (evt >= DSU_PMU_MAX_COMMON_EVENTS)
>> +		return true;
>> +	/* The PMU driver doesn't support chain mode */
>> +	if (evt == DSU_PMU_EVT_CHAIN)
>> +		return false;
>> +	return test_bit(evt, dsu_pmu->cpmceid_bitmap);
>> +}
>
> I don't think we need to use this for event_init() (more on that below),
> so I think this can be simplified.
>
> [...]
>
>> +static struct attribute *dsu_pmu_cpumask_attrs[] = {
>> +	DSU_CPUMASK_ATTR(cpumask, DSU_ACTIVE_CPU_MASK),
>> +	DSU_CPUMASK_ATTR(supported_cpus, DSU_SUPPORTED_CPU_MASK),
>> +	NULL,
>> +};
>
> Normally we only expose one mask.
>
> Why do we need the supported cpu mask? What is the intended use-case?
>
> [...]
>

That's just to let the user know the CPUs bound to this PMU instance.

>> +static inline u64 dsu_pmu_read_counter(struct perf_event *event)
>> +{
>> +	u64 val = 0;
>> +	unsigned long flags;
>> +	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
>> +	int idx = event->hw.idx;
>> +
>> +	if (WARN_ON(!cpumask_test_cpu(smp_processor_id(),
>> +				 &dsu_pmu->supported_cpus)))
>> +		return 0;
>
> It would be better to check that this is the active CPU, since reading
> on another supported CPU would still be wrong.
>
> Likewise for the check in dsu_pmu_write_counter().
>
> [...]
>

OK.

>> +static void dsu_pmu_enable_counter(struct dsu_pmu *dsu_pmu, int idx)
>> +{
>> +	unsigned long flags;
>> +
>> +	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
>> +	__dsu_pmu_counter_interrupt_enable(idx);
>> +	__dsu_pmu_enable_counter(idx);
>> +	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
>> +}
>> +
>> +static void dsu_pmu_disable_counter(struct dsu_pmu *dsu_pmu, int idx)
>> +{
>> +	unsigned long flags;
>> +
>> +	raw_spin_lock_irqsave(&dsu_pmu->pmu_lock, flags);
>> +	__dsu_pmu_disable_counter(idx);
>> +	__dsu_pmu_counter_interrupt_disable(idx);
>> +	raw_spin_unlock_irqrestore(&dsu_pmu->pmu_lock, flags);
>> +}
>
> Do we need the disable/enable IRQs across these? AFAICT these writing to
> set/clr registers, so there's no RMW race to worry about so long as the
> rest of the code does likewise.
>
> [...]

Ah, you're right. We don't need to do a select register operation here.
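
With only the set/clr registers involved, those two helpers can then be
reduced to something like (an untested sketch):

	static void dsu_pmu_enable_counter(struct dsu_pmu *dsu_pmu, int idx)
	{
		__dsu_pmu_counter_interrupt_enable(idx);
		__dsu_pmu_enable_counter(idx);
	}

	static void dsu_pmu_disable_counter(struct dsu_pmu *dsu_pmu, int idx)
	{
		__dsu_pmu_disable_counter(idx);
		__dsu_pmu_counter_interrupt_disable(idx);
	}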

>
>> +static inline u32 dsu_pmu_get_status(void)
>> +{
>> +	return __dsu_pmu_get_pmovsclr();
>> +}
>
> This could do with a better name. It's not clear what status it's meant
> to get.
>
> [...]
>

Agreed, I was in two minds about the name myself.

>> +static irqreturn_t dsu_pmu_handle_irq(int irq_num, void *dev)
>> +{
>> +	int i;
>> +	bool handled = false;
>> +	struct dsu_pmu *dsu_pmu = dev;
>> +	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
>> +	unsigned long overflow, workset;
>> +
>> +	overflow = (unsigned long)dsu_pmu_get_status();
>
> This cast isn't necessary.
>
>> +	bitmap_and(&workset, &overflow, hw_events->used_mask,
>> +		   DSU_PMU_MAX_HW_CNTRS);
>> +
>> +	if (!workset)
>> +		return IRQ_NONE;
>> +
>> +	for_each_set_bit(i, &workset, DSU_PMU_MAX_HW_CNTRS) {
>> +		struct perf_event *event = hw_events->events[i];
>> +
>> +		if (WARN_ON(!event))
>> +			continue;
>
> Unless we explicitly clear an event's overflow flag when we remove it,
> this could happen in normal operation. This should probably have a
> _ONCE(), if we think we need it.
>
> [...]
>


>> +static int dsu_pmu_add(struct perf_event *event, int flags)
>> +{
>> +	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
>> +	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
>> +	struct hw_perf_event *hwc = &event->hw;
>> +	int idx;
>> +
>> +	if (!cpumask_test_cpu(smp_processor_id(), &dsu_pmu->supported_cpus))
>> +		return -ENOENT;
>
> This shouldn't ever happen, and we can check against the active cpumask,
> with a WARN_ON_ONCE(). We have to do this for CPU PMUs since they
> support events which can migrate with tasks, but that's not the case
> here.
>
> [...]
>

But, we have to make sure we are reading the events from a CPU within this
DSU in case we have multiple DSU units. In any case, reading it from a CPU
outside the active cpumask is a tighter check. I will change it.


>> +static int dsu_pmu_validate_event(struct pmu *pmu,
>> +				  struct dsu_hw_events *hw_events,
>> +				  struct perf_event *event)
>> +{
>> +	if (is_software_event(event))
>> +		return 1;
>> +	/* Reject groups spanning multiple HW PMUs. */
>> +	if (event->pmu != pmu)
>> +		return 0;
>> +	if (event->state < PERF_EVENT_STATE_OFF)
>> +		return 1;
>> +	if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
>> +		return 1;
>
> This doesn't make sense for a !CPU pmu, and I think you can nuke the
> prior test, too, which I think doesn't make sense.
>
>> +	return dsu_pmu_get_event_idx(hw_events, event) >= 0;
>> +}
>
> This whole function could be bool.
>

Agreed.

>> +/*
>> + * Make sure the group of events can be scheduled at once
>> + * on the PMU.
>> + */
>> +static int dsu_pmu_validate_group(struct perf_event *event)
>> +{
>> +	struct perf_event *sibling, *leader = event->group_leader;
>> +	struct dsu_hw_events fake_hw;
>> +
>> +	if (event->group_leader == event)
>> +		return 0;
>> +
>> +	memset(fake_hw.used_mask, 0, sizeof(fake_hw.used_mask));
>> +	if (!dsu_pmu_validate_event(event->pmu, &fake_hw, leader))
>> +		return -EINVAL;
>> +	list_for_each_entry(sibling, &leader->sibling_list, group_entry) {
>> +		if (!dsu_pmu_validate_event(event->pmu, &fake_hw, sibling))
>> +			return -EINVAL;
>> +	}
>> +	if (!dsu_pmu_validate_event(event->pmu, &fake_hw, event))
>> +		return -EINVAL;
>> +	return 0;
>> +}
>
> The return value has the opposite polarity to dsu_pmu_validate_event(),
> which is rather confusing.
>
> [...]
>

Yes, you're right. I will clean it up.

>> +static int dsu_pmu_event_init(struct perf_event *event)
>> +{
>> +	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
>> +
>> +	if (event->attr.type != event->pmu->type)
>> +		return -ENOENT;
>> +
>> +	if (!dsu_pmu_event_supported(dsu_pmu, event->attr.config))
>> +		return -EOPNOTSUPP;
>
> I can see why we track this for the sysfs nodes, but why do we bother
> enforcing it for event_init()? Especially given we only enforce it for a
> subset of values...
>
> We don't do that for other PMUs, and assuming that this doesn't break
> the HW, I'd rather we allowed users to request raw values, even if we
> think the HW won't count them.

OK, I will remove it.

>> +
>> +	/* We cannot support task bound events */
>> +	if (event->cpu < 0) {
>> +		dev_dbg(dsu_pmu->pmu.dev, "Can't support per-task counters\n");
>> +		return -EINVAL;
>> +	}
>
> We should also check (event->attach_state & PERF_ATTACH_TASK), since you
> could have a per-task event with a CPU filter, and that doesn't make
> sense either.
>
> [...]

OK

>
>> +/**
>> + * dsu_pmu_dt_get_cpus: Get the list of CPUs in the cluster.
>> + */
>> +static int dsu_pmu_dt_get_cpus(struct device_node *dev, cpumask_t *mask)
>> +{
>> +	int i = 0, n, cpu;
>> +	struct device_node *cpu_node;
>> +
>> +	n = of_count_phandle_with_args(dev, "cpus", NULL);
>> +	if (n <= 0)
>> +		return -ENODEV;
>> +	for (; i < n; i++) {
>> +		cpu_node = of_parse_phandle(dev, "cpus", i);
>> +		if (!cpu_node)
>> +			break;
>> +		cpu = of_cpu_node_to_id(cpu_node);
>> +		of_node_put(cpu_node);
>> +		if (cpu < 0)
>> +			break;
>
> I believe this can happen if the kernel's nr_cpus were capped below the
> number of CPUs in the system. So we need to iterate all entries of the
> cpus list regardless.
>

Good point. Initial version of the driver used to ignore the failures, but
was changed later. I will roll it back.

>> +		cpumask_set_cpu(cpu, mask);
>> +	}
>> +	if (i != n)
>> +		return -EINVAL;
>> +	return 0;
>> +}
>
> [...]
>
>> +
>> +/*
>> + * dsu_pmu_probe_pmu: Probe the PMU details on a CPU in the cluster.
>> + */
>> +static void dsu_pmu_probe_pmu(void *data)
>> +{
>> +	struct dsu_pmu *dsu_pmu = data;
>> +	u64 cpmcr;
>> +	u32 cpmceid[2];
>> +
>> +	if (WARN_ON(!cpumask_test_cpu(smp_processor_id(),
>> +		&dsu_pmu->supported_cpus)))
>> +		return;
>
> I can't see how this can happen unless core kernel code is critically
> broken, given that the only caller uses smp_call_function_any() with the
> supported_cpus mask.
>
> Do we need this check?

You're right, we can remove it.

>
>> +	cpmcr = __dsu_pmu_read_pmcr();
>> +	dsu_pmu->num_counters = ((cpmcr >> CLUSTERPMCR_N_SHIFT) &
>> +					CLUSTERPMCR_N_MASK);
>> +	if (!dsu_pmu->num_counters)
>> +		return;
>
> Is that valid? What range of values makes sense?
>
> [...]
>

We should at least have one counter implemented (excluding the cycle counter).
And yes, we should check if the num_counters <= 31.
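
i.e. roughly (an untested sketch; the 31 limit and the CLUSTERPMCR_N_MAX
name just reflect the discussion above):

	#define CLUSTERPMCR_N_MAX	31

	dsu_pmu->num_counters = (cpmcr >> CLUSTERPMCR_N_SHIFT) &
				CLUSTERPMCR_N_MASK;
	if (!dsu_pmu->num_counters ||
	    dsu_pmu->num_counters > CLUSTERPMCR_N_MAX)
		return;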

>> +	/*
>> +	 * Find one CPU in the DSU to handle the IRQs.
>> +	 * It is highly unlikely that we would fail
>> +	 * to find one, given the probing has succeeded.
>> +	 */
>> +	cpu = dsu_pmu_get_online_cpu(dsu_pmu);
>> +	if (cpu >= nr_cpu_ids)
>> +		return -ENODEV;
>> +	cpumask_set_cpu(cpu, &dsu_pmu->active_cpu);
>> +	rc = irq_set_affinity_hint(irq, &dsu_pmu->active_cpu);
>
> Is setting the affinity hint strong enough?
>
> Can the affinity be changed behind the back of this driver?
>

Did you mean to use "force"d affinity settings ? If so, modules
don't have the luxury of doing that. Hence this one. I think that also
brings up the problem where we could be reading the counters from
a different CPU than we requested. So, I think it would be good to keep
the CPU check, wherever we could access the PMU.

>> +static int dsu_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
>> +{
>> +	int dst;
>> +	struct dsu_pmu *dsu_pmu = hlist_entry_safe(node, struct dsu_pmu,
>> +						   cpuhp_node);
>> +
>> +	if (!cpumask_test_and_clear_cpu(cpu, &dsu_pmu->active_cpu))
>> +		return 0;
>> +
>> +	dst = dsu_pmu_get_online_cpu_any_but(dsu_pmu, cpu);
>> +	if (dst < nr_cpu_ids) {
>> +		cpumask_set_cpu(dst, &dsu_pmu->active_cpu);
>> +		perf_pmu_migrate_context(&dsu_pmu->pmu, cpu, dst);
>> +		irq_set_affinity_hint(dsu_pmu->irq, &dsu_pmu->active_cpu);
>
> Likewise, is this sufficient?

Thanks a lot for the detailed review !

Cheers
Suzuki

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 5/6] dt-bindings: Document devicetree binding for ARM DSU PMU
  2017-08-08 11:37 ` [PATCH v5 5/6] dt-bindings: Document devicetree binding for ARM DSU PMU Suzuki K Poulose
@ 2017-08-17 15:09   ` Rob Herring
  0 siblings, 0 replies; 13+ messages in thread
From: Rob Herring @ 2017-08-17 15:09 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: linux-arm-kernel, will.deacon, marc.zyngier, mark.rutland,
	devicetree, linux-kernel, frowand.list, mathieu.poirier, peterz

On Tue, Aug 08, 2017 at 12:37:25PM +0100, Suzuki K Poulose wrote:
> This patch documents the devicetree bindings for ARM DSU PMU.
> 
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Rob Herring <robh@kernel.org>
> Cc: devicetree@vger.kernel.org
> Cc: frowand.list@gmail.com
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> Changes since V3:
>  - Fixed node name in the example, suggested by Rob
> ---
>  .../devicetree/bindings/arm/arm-dsu-pmu.txt        | 27 ++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/arm/arm-dsu-pmu.txt

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support
  2017-08-17 14:52     ` Suzuki K Poulose
@ 2017-08-17 15:57       ` Mark Rutland
  2017-08-18 10:43         ` Suzuki K Poulose
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2017-08-17 15:57 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: linux-arm-kernel, will.deacon, marc.zyngier, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz

On Thu, Aug 17, 2017 at 03:52:24PM +0100, Suzuki K Poulose wrote:
> On 16/08/17 15:10, Mark Rutland wrote:
> >On Tue, Aug 08, 2017 at 12:37:26PM +0100, Suzuki K Poulose wrote:

> >>+static struct attribute *dsu_pmu_event_attrs[] = {
> >>+	DSU_EVENT_ATTR(cycles, 0x11),
> >>+	DSU_EVENT_ATTR(bus_acecss, 0x19),
> >>+	DSU_EVENT_ATTR(memory_error, 0x1a),
> >>+	DSU_EVENT_ATTR(bus_cycles, 0x1d),
> >>+	DSU_EVENT_ATTR(l3d_cache_allocate, 0x29),
> >>+	DSU_EVENT_ATTR(l3d_cache_refill, 0x2a),
> >>+	DSU_EVENT_ATTR(l3d_cache, 0x2b),
> >>+	DSU_EVENT_ATTR(l3d_cache_wb, 0x2c),
> >
> >MAX_COMMON_EVENTS seems to be 0x40, so are we just assuming the below
> >are implemented?
> >
> >If so, why bother exposing them at all? We can't know that they're going
> >to work...
> 
> That's right. The only defending argument is that the event codes are specific
> to the DynamIQ, unlike the COMMON_EVENTS which matches the ARMv8 PMU event codes.
> So, someone would need to carefully find the event code for a particular event.
> Having these entries would make it easier for the user to do the profiling.
> 
> Also, future revisions of the DSU could potentially expose more events. So there
> wouldn't be any way to tell the user (provided there aren't any changes to the
> programming model and we decide to reuse the same compatible string) what we *could*
> potentially support. In short, this is not a problem at the moment and we could
> do something about it as and when required.

I'd rather that we only describe those that we can probe from the
PMCEID* registers, and for the rest, leave the user to consult a manual.

I can well imagine future variants of this IP supporting different
events, and I'd prefer to avoid proliferating tables for those.

[...]

> >>+static struct attribute *dsu_pmu_cpumask_attrs[] = {
> >>+	DSU_CPUMASK_ATTR(cpumask, DSU_ACTIVE_CPU_MASK),
> >>+	DSU_CPUMASK_ATTR(supported_cpus, DSU_SUPPORTED_CPU_MASK),
> >>+	NULL,
> >>+};
> >
> >Normally we only expose one mask.
> >
> >Why do we need the supported cpu mask? What is the intended use-case?
> 
> That's just to let the user know the CPUs bound to this PMU instance.

I guess that can be useful, though the cpumasks we expose today are
confusing as-is, and this is another point of confusion.

We could drop this for now, and add it when requested, or we should try
to make the naming clearer somehow -- "supported" can be read in a
number of ways.

Further, it would be worth documenting this PMU under
Documentation/perf/.

[...]

> >>+static int dsu_pmu_add(struct perf_event *event, int flags)
> >>+{
> >>+	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
> >>+	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
> >>+	struct hw_perf_event *hwc = &event->hw;
> >>+	int idx;
> >>+
> >>+	if (!cpumask_test_cpu(smp_processor_id(), &dsu_pmu->supported_cpus))
> >>+		return -ENOENT;
> >
> >This shouldn't ever happen, and we can check against the active cpumask,
> >with a WARN_ON_ONCE(). We have to do this for CPU PMUs since they
> >support events which can migrate with tasks, but that's not the case
> >here.
> >
> >[...]
> 
> But, we have to make sure we are reading the events from a CPU within this
> DSU in case we have multiple DSU units.

Regardless of how many instances there are, the core should *never*
add() a CPU-bound event (which DSU events are) on another CPU. To do so
would be a major bug.

So if this is just a paranoid check, we should WARN_ON_ONCE().
Otherwise, it's unnecessary.

> >>+/**
> >>+ * dsu_pmu_dt_get_cpus: Get the list of CPUs in the cluster.
> >>+ */
> >>+static int dsu_pmu_dt_get_cpus(struct device_node *dev, cpumask_t *mask)
> >>+{
> >>+	int i = 0, n, cpu;
> >>+	struct device_node *cpu_node;
> >>+
> >>+	n = of_count_phandle_with_args(dev, "cpus", NULL);
> >>+	if (n <= 0)
> >>+		return -ENODEV;
> >>+	for (; i < n; i++) {
> >>+		cpu_node = of_parse_phandle(dev, "cpus", i);
> >>+		if (!cpu_node)
> >>+			break;
> >>+		cpu = of_cpu_node_to_id(cpu_node);
> >>+		of_node_put(cpu_node);
> >>+		if (cpu < 0)
> >>+			break;
> >
> >I believe this can happen if the kernel's nr_cpus were capped below the
> >number of CPUs in the system. So we need to iterate all entries of the
> >cpus list regardless.
> >
> 
> Good point. Initial version of the driver used to ignore the failures, but
> was changed later. I will roll it back.

Thanks. If you could add a comment as to why, that'll hopefully avoid
anyone trying to "fix" the logic later.

[...]

> >>+	cpmcr = __dsu_pmu_read_pmcr();
> >>+	dsu_pmu->num_counters = ((cpmcr >> CLUSTERPMCR_N_SHIFT) &
> >>+					CLUSTERPMCR_N_MASK);
> >>+	if (!dsu_pmu->num_counters)
> >>+		return;
> >
> >Is that valid? What range of values makes sense?
> >
> >[...]
> >
> 
> We should at least have one counter implemented (excluding the cycle counter).
> And yes, we should check if the num_counters <= 31.

Ok.

> >>+	/*
> >>+	 * Find one CPU in the DSU to handle the IRQs.
> >>+	 * It is highly unlikely that we would fail
> >>+	 * to find one, given the probing has succeeded.
> >>+	 */
> >>+	cpu = dsu_pmu_get_online_cpu(dsu_pmu);
> >>+	if (cpu >= nr_cpu_ids)
> >>+		return -ENODEV;
> >>+	cpumask_set_cpu(cpu, &dsu_pmu->active_cpu);
> >>+	rc = irq_set_affinity_hint(irq, &dsu_pmu->active_cpu);
> >
> >Is setting the affinity hint strong enough?
> >
> >Can the affinity be changed behind the back of this driver?
> 
> Did you mean to use "force"d affinity settings ? If so, modules
> don't have the luxury of doing that. 

Perhaps. We absolutely need to ensure that the driver makes the IRQ
affine to the active CPU, and no other SW will change the affinity of
the IRQ.

Otherwise, the IRQ handler is dangerous, violating locking requirements,
potentially corrupting memory, etc.

> Hence this one. I think that also brings up the problem where we could
> be reading the counters from a different CPU than we requested. So, I
> think it would be good to keep the CPU check, wherever we could access
> the PMU.

While I'm happy to have that as a paranoid sanity check, we cannot rely
upon that for correctness. We must ensure that we manage the interrupt
affinity correctly.

If that means we need the forced affinity helpers, we must ensure that
we have access to those.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support
  2017-08-17 15:57       ` Mark Rutland
@ 2017-08-18 10:43         ` Suzuki K Poulose
  2017-08-18 10:49           ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Suzuki K Poulose @ 2017-08-18 10:43 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, will.deacon, marc.zyngier, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz

On 17/08/17 16:57, Mark Rutland wrote:
> On Thu, Aug 17, 2017 at 03:52:24PM +0100, Suzuki K Poulose wrote:
>> On 16/08/17 15:10, Mark Rutland wrote:
>>> On Tue, Aug 08, 2017 at 12:37:26PM +0100, Suzuki K Poulose wrote:
>
>>>> +static struct attribute *dsu_pmu_event_attrs[] = {
>>>> +	DSU_EVENT_ATTR(cycles, 0x11),
>>>> +	DSU_EVENT_ATTR(bus_acecss, 0x19),
>>>> +	DSU_EVENT_ATTR(memory_error, 0x1a),
>>>> +	DSU_EVENT_ATTR(bus_cycles, 0x1d),
>>>> +	DSU_EVENT_ATTR(l3d_cache_allocate, 0x29),
>>>> +	DSU_EVENT_ATTR(l3d_cache_refill, 0x2a),
>>>> +	DSU_EVENT_ATTR(l3d_cache, 0x2b),
>>>> +	DSU_EVENT_ATTR(l3d_cache_wb, 0x2c),
>>>
>>> MAX_COMMON_EVENTS seems to be 0x40, so are we just assuming the below
>>> are implemented?
>>>
>>> If so, why bother exposing them at all? We can't know that they're going
>>> to work...
>>
>> That's right. The only defending argument is that the event codes are specific
>> to the DynamIQ, unlike the COMMON_EVENTS which matches the ARMv8 PMU event codes.
>> So, someone would need to carefully find the event code for a particular event.
>> Having these entries would make it easier for the user to do the profiling.
>>
>> Also, future revisions of the DSU could potentially expose more events. So there
>> wouldn't be any way to tell the user (provided there aren't any changes to the
>> programming model and we decide to reuse the same compatible string) what we *could*
>> potentially support. In short, this is not a problem at the moment and we could
>> do something about it as and when required.
>
> I'd rather that we only describe those that we can probe from the
> PMCEID* registers, and for the rest, leave the user to consult a manual.
>
> I can well imagine future variants of this IP supporting different
> events, and I'd prefer to avoid proliferating tables for those.
>
> [...]

Fair enough. I will trim the list.
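
Alternatively, rather than trimming the table, I could keep the entries
and hide the events that the hardware doesn't advertise, using an
is_visible() callback on the attribute group - something like this
(an untested sketch; it assumes the event attributes are backed by
dev_ext_attribute):

	static umode_t
	dsu_pmu_event_attr_is_visible(struct kobject *kobj,
				      struct attribute *attr, int unused)
	{
		struct pmu *pmu = dev_get_drvdata(kobj_to_dev(kobj));
		struct dsu_pmu *dsu_pmu = to_dsu_pmu(pmu);
		struct dev_ext_attribute *eattr = container_of(attr,
					struct dev_ext_attribute, attr.attr);
		unsigned long evt = (unsigned long)eattr->var;

		/* Hide anything we cannot probe via the PMCEID registers. */
		if (evt >= DSU_PMU_MAX_COMMON_EVENTS)
			return 0;
		return test_bit(evt, dsu_pmu->cpmceid_bitmap) ?
		       attr->mode : 0;
	}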


>>>> +static struct attribute *dsu_pmu_cpumask_attrs[] = {
>>>> +	DSU_CPUMASK_ATTR(cpumask, DSU_ACTIVE_CPU_MASK),
>>>> +	DSU_CPUMASK_ATTR(supported_cpus, DSU_SUPPORTED_CPU_MASK),
>>>> +	NULL,
>>>> +};
>>>
>>> Normally we only expose one mask.
>>>
>>> Why do we need the supported cpu mask? What is the intended use-case?
>>
>> That's just to let the user know the CPUs bound to this PMU instance.
>
> I guess that can be useful, though the cpumasks we expose today are
> confusing as-is, and this is another point of confusion.
>
> We could drop this for now, and add it when requested, or we should try
> to make the naming clearer somehow -- "supported" can be read in a
> number of ways.

How about dsu_cpus or connected_cpus ?

>
> Further, it would be worth documenting this PMU under
> Documentation/perf/.
>
> [...]
>

OK


>>>> +static int dsu_pmu_add(struct perf_event *event, int flags)
>>>> +{
>>>> +	struct dsu_pmu *dsu_pmu = to_dsu_pmu(event->pmu);
>>>> +	struct dsu_hw_events *hw_events = &dsu_pmu->hw_events;
>>>> +	struct hw_perf_event *hwc = &event->hw;
>>>> +	int idx;
>>>> +
>>>> +	if (!cpumask_test_cpu(smp_processor_id(), &dsu_pmu->supported_cpus))
>>>> +		return -ENOENT;
>>>
>>> This shouldn't ever happen, and we can check against the active cpumask,
>>> with a WARN_ON_ONCE(). We have to do this for CPU PMUs since they
>>> support events which can migrate with tasks, but that's not the case
>>> here.
>>>
>>> [...]
>>
>> But, we have to make sure we are reading the events from a CPU within this
>> DSU in case we have multiple DSU units.
>
> Regardless of how many instances there are, the core should *never*
> add() a CPU-bound event (which DSU events are) on another CPU. To do so
> would be a major bug.
>
> So if this is just a paranoid check, we should WARN_ON_ONCE().
> Otherwise, it's unnecessary.

OK

>
>>>> +/**
>>>> + * dsu_pmu_dt_get_cpus: Get the list of CPUs in the cluster.
>>>> + */
>>>> +static int dsu_pmu_dt_get_cpus(struct device_node *dev, cpumask_t *mask)
>>>> +{
>>>> +	int i = 0, n, cpu;
>>>> +	struct device_node *cpu_node;
>>>> +
>>>> +	n = of_count_phandle_with_args(dev, "cpus", NULL);
>>>> +	if (n <= 0)
>>>> +		return -ENODEV;
>>>> +	for (; i < n; i++) {
>>>> +		cpu_node = of_parse_phandle(dev, "cpus", i);
>>>> +		if (!cpu_node)
>>>> +			break;
>>>> +		cpu = of_cpu_node_to_id(cpu_node);
>>>> +		of_node_put(cpu_node);
>>>> +		if (cpu < 0)
>>>> +			break;
>>>
>>> I believe this can happen if the kernel's nr_cpus were capped below the
>>> number of CPUs in the system. So we need to iterate all entries of the
>>> cpus list regardless.
>>>
>>
>> Good point. Initial version of the driver used to ignore the failures, but
>> was changed later. I will roll it back.
>
> Thanks. If you could add a comment as to why, that'll hopefully avoid
> anyone trying to "fix" the logic later.
>
> [...]
>

Sure

>>>> +	cpmcr = __dsu_pmu_read_pmcr();
>>>> +	dsu_pmu->num_counters = ((cpmcr >> CLUSTERPMCR_N_SHIFT) &
>>>> +					CLUSTERPMCR_N_MASK);
>>>> +	if (!dsu_pmu->num_counters)
>>>> +		return;
>>>
>>> Is that valid? What range of values makes sense?
>>>
>>> [...]
>>>
>>
>> We should at least have one counter implemented (excluding the cycle counter).
>> And yes, we should check if the num_counters <= 31.
>
> Ok.
>
>>>> +	/*
>>>> +	 * Find one CPU in the DSU to handle the IRQs.
>>>> +	 * It is highly unlikely that we would fail
>>>> +	 * to find one, given the probing has succeeded.
>>>> +	 */
>>>> +	cpu = dsu_pmu_get_online_cpu(dsu_pmu);
>>>> +	if (cpu >= nr_cpu_ids)
>>>> +		return -ENODEV;
>>>> +	cpumask_set_cpu(cpu, &dsu_pmu->active_cpu);
>>>> +	rc = irq_set_affinity_hint(irq, &dsu_pmu->active_cpu);
>>>
>>> Is setting the affinity hint strong enough?
>>>
>>> Can the affinity be changed behind the back of this driver?
>>
>> Did you mean to use "force"d affinity settings ? If so, modules
>> don't have the luxury of doing that.
>
> Perhaps. We absolutely need to ensure that the driver makes the IRQ
> affine to the active CPU, and no other SW will change the affinity of
> the IRQ.
>
> Otherwise, the IRQ handler is dangerous, violating locking requirements,
> potentially corrupting memory, etc.
>
>> Hence this one. I think that also brings up the problem where we could
>> be reading the counters from a different CPU than we requested. So, I
>> think it would be good to keep the CPU check, wherever we could access
>> the PMU.
>
> While I'm happy to have that as a paranoid sanity check, we cannot rely
> upon that for correctness. We must ensure that we manage the interrupt
> affinity correctly.
>
> If that means we need the forced affinity helpers, we must ensure that
> we have access to those.

As per our offline discussion, I will go ahead with set_affinity_hint and
the IRQ_NO_BALANCING flag, so that the IRQ affinity is not disturbed by
userspace.
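
i.e. at probe time, roughly (an untested sketch; pdev and name stand in
for the driver's platform device and irq name):

	irq_set_status_flags(irq, IRQ_NO_BALANCING);
	rc = devm_request_irq(&pdev->dev, irq, dsu_pmu_handle_irq, 0,
			      name, dsu_pmu);
	if (rc)
		return rc;
	rc = irq_set_affinity_hint(irq, &dsu_pmu->active_cpu);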

Suzuki

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support
  2017-08-18 10:43         ` Suzuki K Poulose
@ 2017-08-18 10:49           ` Mark Rutland
  0 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2017-08-18 10:49 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: linux-arm-kernel, will.deacon, marc.zyngier, devicetree,
	linux-kernel, robh, frowand.list, mathieu.poirier, peterz

On Fri, Aug 18, 2017 at 11:43:32AM +0100, Suzuki K Poulose wrote:
> On 17/08/17 16:57, Mark Rutland wrote:
> >On Thu, Aug 17, 2017 at 03:52:24PM +0100, Suzuki K Poulose wrote:
> >>On 16/08/17 15:10, Mark Rutland wrote:
> >>>On Tue, Aug 08, 2017 at 12:37:26PM +0100, Suzuki K Poulose wrote:

> >>>>+static struct attribute *dsu_pmu_cpumask_attrs[] = {
> >>>>+	DSU_CPUMASK_ATTR(cpumask, DSU_ACTIVE_CPU_MASK),
> >>>>+	DSU_CPUMASK_ATTR(supported_cpus, DSU_SUPPORTED_CPU_MASK),
> >>>>+	NULL,
> >>>>+};
> >>>
> >>>Normally we only expose one mask.
> >>>
> >>>Why do we need the supported cpu mask? What is the intended use-case?
> >>
> >>That's just to let the user know the CPUs bound to this PMU instance.
> >
> >I guess that can be useful, though the cpumasks we expose today are
> >confusing as-is, and this is another point of confusion.
> >
> >We could drop this for now, and add it when requested, or we should try
> >to make the naming clearer somehow -- "supported" can be read in a
> >number of ways.
> 
> How about dsu_cpus or connected_cpus ?

I think "connected_cpus" or "associated_cpus" might be ok.

Thinking a little further, this is something other PMUs might want to
expose (and perhaps some x86 PMUs already do?), so it would be good to
align naming-wise. 

[...]

> >>>>+	/*
> >>>>+	 * Find one CPU in the DSU to handle the IRQs.
> >>>>+	 * It is highly unlikely that we would fail
> >>>>+	 * to find one, given the probing has succeeded.
> >>>>+	 */
> >>>>+	cpu = dsu_pmu_get_online_cpu(dsu_pmu);
> >>>>+	if (cpu >= nr_cpu_ids)
> >>>>+		return -ENODEV;
> >>>>+	cpumask_set_cpu(cpu, &dsu_pmu->active_cpu);
> >>>>+	rc = irq_set_affinity_hint(irq, &dsu_pmu->active_cpu);
> >>>
> >>>Is setting the affinity hint strong enough?
> >>>
> >>>Can the affinity be changed behind the back of this driver?
> >>
> >>Did you mean to use "force"d affinity settings ? If so, modules
> >>don't have the luxury of doing that.
> >
> >Perhaps. We absolutely need to ensure that the driver makes the IRQ
> >affine to the active CPU, and no other SW will change the affinity of
> >the IRQ.
> >
> >Otherwise, the IRQ handler is dangerous, violating locking requirements,
> >potentially corrupting memory, etc.
> >
> >>Hence this one. I think that also brings up the problem where we could
> >>be reading the counters from a different CPU than we requested. So, I
> >>think it would be good to keep the CPU check, wherever we could access
> >>the PMU.
> >
> >While I'm happy to have that as a paranoid sanity check, we cannot rely
> >upon that for correctness. We must ensure that we manage the interrupt
> >affinity correctly.
> >
> >If that means we need the forced affinity helpers, we must ensure that
> >we have access to those.
> 
> As per our offline discussion, I will go ahead with set_affinity_hint and
> the IRQ_NO_BALANCING flag, so that the IRQ affinity is not disturbed by
> userspace.

Sounds good.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2017-08-18 10:50 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-08 11:37 [PATCH v5 0/6] perf: Support for ARM DynamIQ Shared Unit Suzuki K Poulose
2017-08-08 11:37 ` [PATCH v5 1/6] perf: Export perf_event_update_userpage Suzuki K Poulose
2017-08-08 11:37 ` [PATCH v5 2/6] of: Add helper for mapping device node to logical CPU number Suzuki K Poulose
2017-08-08 11:37 ` [PATCH v5 3/6] coresight: of: Use of_cpu_node_to_id helper Suzuki K Poulose
2017-08-08 11:37 ` [PATCH v5 4/6] irqchip: gic-v3: " Suzuki K Poulose
2017-08-08 11:37 ` [PATCH v5 5/6] dt-bindings: Document devicetree binding for ARM DSU PMU Suzuki K Poulose
2017-08-17 15:09   ` Rob Herring
2017-08-08 11:37 ` [PATCH v5 6/6] perf: ARM DynamIQ Shared Unit PMU support Suzuki K Poulose
2017-08-16 14:10   ` Mark Rutland
2017-08-17 14:52     ` Suzuki K Poulose
2017-08-17 15:57       ` Mark Rutland
2017-08-18 10:43         ` Suzuki K Poulose
2017-08-18 10:49           ` Mark Rutland
