* [PATCHv2 00/16] arm_pmu: ACPI support
@ 2017-04-06 18:29 Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 01/16] drivers/perf: arm_pmu: remove pointless PMU disabling Mark Rutland
                   ` (16 more replies)
  0 siblings, 17 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Hi,

This series implements ACPI support in the ARM PMU code. It borrows some code
from Jeremy's series [1], but takes a different approach to probing and
association, using the usual hotplug state machine, minimising external changes
required, and simplifying the relationship with the common arm_pmu code.

The first few patches are preparatory cleanup/refactoring, with the latter half
of the series being specific to ACPI support.

The series is based on Will's perf/updates branch [2]. I've pushed the whole
series out to the arm/perf/acpi branch [3] of my kernel.org repo.

Due to the innards of the hotplug callback framework, it's not entirely
safe to register a PMU in a hotplug callback. Consequently, we can only
associate hotplugged CPUs with a PMU if a matching CPU was around at
probe time. A similar restriction already applies to DT systems. We may
be able to relax this with some future work.

I've given this some testing on a Juno platform (using SPIs). To see that IRQs
are correctly associated, I've tested with the following:

  $ taskset -c ${SOME_CPU_HERE} perf record \
    -e armv8_pmuv3_0/cpu_cycles/ \
    -e armv8_pmuv3_1/cpu_cycles/ \
    cat /proc/interrupts
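
The per-CPU counts can also be eyeballed straight out of /proc/interrupts. A
minimal sketch of doing that (not part of the series; the sample input and the
"arm-pmu" label here are illustrative, and the exact field layout varies
between kernels and interrupt controllers):

```shell
# Toy sample of /proc/interrupts output; real field layout varies.
sample='          CPU0  CPU1
 34:      1000     0  GICv2  34 Level  arm-pmu
 35:         0   500  GICv2  35 Level  arm-pmu'

# Print each PMU IRQ with its total count across CPUs.
printf '%s\n' "$sample" | awk '/arm-pmu/ { print $1, $2 + $3 }'
# prints:
# 34: 1000
# 35: 500
```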
 
I've also booted with the CPU count temporarily capped (passing maxcpus=) to
test the association logic. This has also been tested in a VM using PPIs; I do
not have access to a host machine which itself uses PPIs.

Since v1 [4]:
* Fix up ACPI bits per Lorenzo's request
* Accumulate tags
* Check presence of PMUv3

Thanks,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-January/482397.html
[2] git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git perf/updates
[3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm/perf/acpi
[4] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-March/492897.html

Mark Rutland (16):
  drivers/perf: arm_pmu: remove pointless PMU disabling
  drivers/perf: arm_pmu: define armpmu_init_fn
  drivers/perf: arm_pmu: fold init into alloc
  drivers/perf: arm_pmu: factor out pmu registration
  drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs()
  drivers/perf: arm_pmu: handle no platform_device
  drivers/perf: arm_pmu: rename irq request/free functions
  drivers/perf: arm_pmu: split cpu-local irq request/free
  drivers/perf: arm_pmu: move irq request/free into probe
  drivers/perf: arm_pmu: split out platform device probe logic
  arm64: add function to get a cpu's MADT GICC table
  arm64: parking: kill acpi_set_mailbox_entry()
  arm64: parking: fix type endianness
  drivers/perf: arm_pmu: add ACPI framework
  arm64: pmuv3: handle !PMUv3 when probing
  arm64: pmuv3: use arm_pmu ACPI framework

 arch/arm64/include/asm/acpi.h             |   7 +-
 arch/arm64/kernel/acpi_parking_protocol.c |  44 ++--
 arch/arm64/kernel/perf_event.c            | 111 ++++++---
 arch/arm64/kernel/smp.c                   |  19 +-
 drivers/perf/Kconfig                      |   4 +
 drivers/perf/Makefile                     |   3 +-
 drivers/perf/arm_pmu.c                    | 387 ++++++++----------------------
 drivers/perf/arm_pmu_acpi.c               | 240 ++++++++++++++++++
 drivers/perf/arm_pmu_platform.c           | 235 ++++++++++++++++++
 include/linux/cpuhotplug.h                |   1 +
 include/linux/perf/arm_pmu.h              |  22 +-
 11 files changed, 704 insertions(+), 369 deletions(-)
 create mode 100644 drivers/perf/arm_pmu_acpi.c
 create mode 100644 drivers/perf/arm_pmu_platform.c

-- 
1.9.1

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCHv2 01/16] drivers/perf: arm_pmu: remove pointless PMU disabling
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 02/16] drivers/perf: arm_pmu: define armpmu_init_fn Mark Rutland
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

We currently disable the PMU temporarily in armpmu_add(). We may have
required this historically, but the perf core always disables an event's
PMU when calling event::pmu::add(), so this is not necessary.

We don't do the same in armpmu_del(), or elsewhere, so this is
unnecessary and inconsistent, and only serves to confuse the reader.

Remove the pointless disable, simplifying armpmu_add() in the process.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index a1dfe89..316c4dc 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -235,20 +235,15 @@ static void armpmu_start(struct perf_event *event, int flags)
 	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 	struct hw_perf_event *hwc = &event->hw;
 	int idx;
-	int err = 0;
 
 	/* An event following a process won't be stopped earlier */
 	if (!cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus))
 		return -ENOENT;
 
-	perf_pmu_disable(event->pmu);
-
 	/* If we don't have a space for the counter then finish early. */
 	idx = armpmu->get_event_idx(hw_events, event);
-	if (idx < 0) {
-		err = idx;
-		goto out;
-	}
+	if (idx < 0)
+		return idx;
 
 	/*
 	 * If there is an event in the counter we are going to use then make
@@ -265,9 +260,7 @@ static void armpmu_start(struct perf_event *event, int flags)
 	/* Propagate our changes to the userspace mapping. */
 	perf_event_update_userpage(event);
 
-out:
-	perf_pmu_enable(event->pmu);
-	return err;
+	return 0;
 }
 
 static int
-- 
1.9.1


* [PATCHv2 02/16] drivers/perf: arm_pmu: define armpmu_init_fn
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 01/16] drivers/perf: arm_pmu: remove pointless PMU disabling Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 03/16] drivers/perf: arm_pmu: fold init into alloc Mark Rutland
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

We expect an ARM PMU's init function to have a particular prototype,
which we open-code in a few places. This is less than ideal, considering
that we cast a void pointer to this type in one location, and a mismatch
could easily be missed.

Add a typedef so that we can ensure this is consistent.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c       | 2 +-
 include/linux/perf/arm_pmu.h | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 316c4dc..71a825d 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -989,7 +989,7 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 			 const struct pmu_probe_info *probe_table)
 {
 	const struct of_device_id *of_id;
-	const int (*init_fn)(struct arm_pmu *);
+	armpmu_init_fn init_fn;
 	struct device_node *node = pdev->dev.of_node;
 	struct arm_pmu *pmu;
 	int ret = -ENODEV;
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 44f43fc..4249914 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -132,10 +132,12 @@ int armpmu_map_event(struct perf_event *event,
 						[PERF_COUNT_HW_CACHE_RESULT_MAX],
 		     u32 raw_event_mask);
 
+typedef int (*armpmu_init_fn)(struct arm_pmu *);
+
 struct pmu_probe_info {
 	unsigned int cpuid;
 	unsigned int mask;
-	int (*init)(struct arm_pmu *);
+	armpmu_init_fn init;
 };
 
 #define PMU_PROBE(_cpuid, _mask, _fn)	\
-- 
1.9.1


* [PATCHv2 03/16] drivers/perf: arm_pmu: fold init into alloc
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 01/16] drivers/perf: arm_pmu: remove pointless PMU disabling Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 02/16] drivers/perf: arm_pmu: define armpmu_init_fn Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 04/16] drivers/perf: arm_pmu: factor out pmu registration Mark Rutland
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Given we always want to initialise common fields on an allocated PMU,
this patch folds this common initialisation into armpmu_alloc(). This
will make it simpler to reuse this code for an ACPI-specific probe path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 52 +++++++++++++++++++++++---------------------------
 1 file changed, 24 insertions(+), 28 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 71a825d..1cb8b1a 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -494,24 +494,6 @@ static ssize_t armpmu_cpumask_show(struct device *dev,
 	.attrs = armpmu_common_attrs,
 };
 
-static void armpmu_init(struct arm_pmu *armpmu)
-{
-	armpmu->pmu = (struct pmu) {
-		.pmu_enable	= armpmu_enable,
-		.pmu_disable	= armpmu_disable,
-		.event_init	= armpmu_event_init,
-		.add		= armpmu_add,
-		.del		= armpmu_del,
-		.start		= armpmu_start,
-		.stop		= armpmu_stop,
-		.read		= armpmu_read,
-		.filter_match	= armpmu_filter_match,
-		.attr_groups	= armpmu->attr_groups,
-	};
-	armpmu->attr_groups[ARMPMU_ATTR_GROUP_COMMON] =
-		&armpmu_common_attr_group;
-}
-
 /* Set at runtime when we know what CPU type we are. */
 static struct arm_pmu *__oprofile_cpu_pmu;
 
@@ -766,14 +748,6 @@ static int cpu_pmu_init(struct arm_pmu *cpu_pmu)
 	if (err)
 		goto out_unregister;
 
-	/*
-	 * This is a CPU PMU potentially in a heterogeneous configuration (e.g.
-	 * big.LITTLE). This is not an uncore PMU, and we have taken ctx
-	 * sharing into account (e.g. with our pmu::filter_match callback and
-	 * pmu::event_init group validation).
-	 */
-	cpu_pmu->pmu.capabilities |= PERF_PMU_CAP_HETEROGENEOUS_CPUS;
-
 	return 0;
 
 out_unregister:
@@ -962,6 +936,30 @@ static struct arm_pmu *armpmu_alloc(void)
 		goto out_free_pmu;
 	}
 
+	pmu->pmu = (struct pmu) {
+		.pmu_enable	= armpmu_enable,
+		.pmu_disable	= armpmu_disable,
+		.event_init	= armpmu_event_init,
+		.add		= armpmu_add,
+		.del		= armpmu_del,
+		.start		= armpmu_start,
+		.stop		= armpmu_stop,
+		.read		= armpmu_read,
+		.filter_match	= armpmu_filter_match,
+		.attr_groups	= pmu->attr_groups,
+		/*
+		 * This is a CPU PMU potentially in a heterogeneous
+		 * configuration (e.g. big.LITTLE). This is not an uncore PMU,
+		 * and we have taken ctx sharing into account (e.g. with our
+		 * pmu::filter_match callback and pmu::event_init group
+		 * validation).
+		 */
+		.capabilities	= PERF_PMU_CAP_HETEROGENEOUS_CPUS,
+	};
+
+	pmu->attr_groups[ARMPMU_ATTR_GROUP_COMMON] =
+		&armpmu_common_attr_group;
+
 	for_each_possible_cpu(cpu) {
 		struct pmu_hw_events *events;
 
@@ -998,8 +996,6 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 	if (!pmu)
 		return -ENOMEM;
 
-	armpmu_init(pmu);
-
 	pmu->plat_device = pdev;
 
 	ret = pmu_parse_irqs(pmu);
-- 
1.9.1


* [PATCHv2 04/16] drivers/perf: arm_pmu: factor out pmu registration
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (2 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 03/16] drivers/perf: arm_pmu: fold init into alloc Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 05/16] drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs() Mark Rutland
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Currently arm_pmu_device_probe contains probing logic specific to the
platform_device infrastructure, and some logic required to safely
register the PMU with various systems.

This patch factors out the logic relating to the registration of the
PMU. This makes arm_pmu_device_probe a little easier to read, and will
make it easier to reuse the logic for an ACPI-specific probing
mechanism.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 1cb8b1a..985bd08 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -982,6 +982,31 @@ static void armpmu_free(struct arm_pmu *pmu)
 	kfree(pmu);
 }
 
+int armpmu_register(struct arm_pmu *pmu)
+{
+	int ret;
+
+	ret = cpu_pmu_init(pmu);
+	if (ret)
+		return ret;
+
+	ret = perf_pmu_register(&pmu->pmu, pmu->name, -1);
+	if (ret)
+		goto out_destroy;
+
+	if (!__oprofile_cpu_pmu)
+		__oprofile_cpu_pmu = pmu;
+
+	pr_info("enabled with %s PMU driver, %d counters available\n",
+		pmu->name, pmu->num_events);
+
+	return 0;
+
+out_destroy:
+	cpu_pmu_destroy(pmu);
+	return ret;
+}
+
 int arm_pmu_device_probe(struct platform_device *pdev,
 			 const struct of_device_id *of_table,
 			 const struct pmu_probe_info *probe_table)
@@ -1025,25 +1050,12 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 		goto out_free;
 	}
 
-
-	ret = cpu_pmu_init(pmu);
+	ret = armpmu_register(pmu);
 	if (ret)
 		goto out_free;
 
-	ret = perf_pmu_register(&pmu->pmu, pmu->name, -1);
-	if (ret)
-		goto out_destroy;
-
-	if (!__oprofile_cpu_pmu)
-		__oprofile_cpu_pmu = pmu;
-
-	pr_info("enabled with %s PMU driver, %d counters available\n",
-			pmu->name, pmu->num_events);
-
 	return 0;
 
-out_destroy:
-	cpu_pmu_destroy(pmu);
 out_free:
 	pr_info("%s: failed to register PMU devices!\n",
 		of_node_full_name(node));
-- 
1.9.1


* [PATCHv2 05/16] drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs()
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (3 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 04/16] drivers/perf: arm_pmu: factor out pmu registration Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 06/16] drivers/perf: arm_pmu: handle no platform_device Mark Rutland
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

The ARM PMU framework code always uses armpmu_dispatch_irq as its common
IRQ handler. Passing this down from cpu_pmu_init() is somewhat
pointless, and gets in the way of refactoring.

This patch makes cpu_pmu_request_irqs() always use armpmu_dispatch_irq
as the handler when requesting IRQs, and removes the handler parameter
from its prototype.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 985bd08..183c6fd 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -543,10 +543,11 @@ static void cpu_pmu_free_irqs(struct arm_pmu *cpu_pmu)
 	}
 }
 
-static int cpu_pmu_request_irqs(struct arm_pmu *cpu_pmu, irq_handler_t handler)
+static int cpu_pmu_request_irqs(struct arm_pmu *cpu_pmu)
 {
 	int cpu, err;
 	struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events;
+	const irq_handler_t handler = armpmu_dispatch_irq;
 
 	for_each_cpu(cpu, &cpu_pmu->supported_cpus) {
 		int irq = per_cpu(hw_events->irq, cpu);
@@ -735,7 +736,7 @@ static int cpu_pmu_init(struct arm_pmu *cpu_pmu)
 {
 	int err;
 
-	err = cpu_pmu_request_irqs(cpu_pmu, armpmu_dispatch_irq);
+	err = cpu_pmu_request_irqs(cpu_pmu);
 	if (err)
 		goto out;
 
-- 
1.9.1


* [PATCHv2 06/16] drivers/perf: arm_pmu: handle no platform_device
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (4 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 05/16] drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs() Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 07/16] drivers/perf: arm_pmu: rename irq request/free functions Mark Rutland
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

In armpmu_dispatch_irq() we look at arm_pmu::plat_device to acquire
platdata, so that we can defer to platform-specific IRQ handling,
required on some 32-bit parts. With the advent of ACPI we won't always
have a platform_device, and so we must avoid trying to dereference
fields from it.

This patch fixes up armpmu_dispatch_irq() to avoid doing so, introducing
a new armpmu_get_platdata() helper.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 183c6fd..2442038 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -316,10 +316,16 @@ static void armpmu_start(struct perf_event *event, int flags)
 	return 0;
 }
 
+static struct arm_pmu_platdata *armpmu_get_platdata(struct arm_pmu *armpmu)
+{
+	struct platform_device *pdev = armpmu->plat_device;
+
+	return pdev ? dev_get_platdata(&pdev->dev) : NULL;
+}
+
 static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
 {
 	struct arm_pmu *armpmu;
-	struct platform_device *plat_device;
 	struct arm_pmu_platdata *plat;
 	int ret;
 	u64 start_clock, finish_clock;
@@ -331,8 +337,8 @@ static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
 	 * dereference.
 	 */
 	armpmu = *(void **)dev;
-	plat_device = armpmu->plat_device;
-	plat = dev_get_platdata(&plat_device->dev);
+
+	plat = armpmu_get_platdata(armpmu);
 
 	start_clock = sched_clock();
 	if (plat && plat->handle_irq)
-- 
1.9.1


* [PATCHv2 07/16] drivers/perf: arm_pmu: rename irq request/free functions
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (5 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 06/16] drivers/perf: arm_pmu: handle no platform_device Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 08/16] drivers/perf: arm_pmu: split cpu-local irq request/free Mark Rutland
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

For historical reasons, portions of the arm_pmu code use a cpu_pmu_
prefix rather than an armpmu_ prefix. While a minor annoyance, this
hasn't been a problem thus far.

However, to enable ACPI support, we'll need to expose a few things in
header files, and we should aim to keep those consistently namespaced.
In preparation for exporting our IRQ request/free functions, rename
these to have an armpmu_ prefix. For consistency, the 'cpu_pmu'
parameter is also renamed to 'armpmu'.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 2442038..3c4e97d 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -527,12 +527,12 @@ int perf_num_counters(void)
 }
 EXPORT_SYMBOL_GPL(perf_num_counters);
 
-static void cpu_pmu_free_irqs(struct arm_pmu *cpu_pmu)
+static void armpmu_free_irqs(struct arm_pmu *armpmu)
 {
 	int cpu;
-	struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events;
+	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
 
-	for_each_cpu(cpu, &cpu_pmu->supported_cpus) {
+	for_each_cpu(cpu, &armpmu->supported_cpus) {
 		int irq = per_cpu(hw_events->irq, cpu);
 		if (!irq)
 			continue;
@@ -542,20 +542,20 @@ static void cpu_pmu_free_irqs(struct arm_pmu *cpu_pmu)
 			break;
 		}
 
-		if (!cpumask_test_and_clear_cpu(cpu, &cpu_pmu->active_irqs))
+		if (!cpumask_test_and_clear_cpu(cpu, &armpmu->active_irqs))
 			continue;
 
 		free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
 	}
 }
 
-static int cpu_pmu_request_irqs(struct arm_pmu *cpu_pmu)
+static int armpmu_request_irqs(struct arm_pmu *armpmu)
 {
 	int cpu, err;
-	struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events;
+	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
 	const irq_handler_t handler = armpmu_dispatch_irq;
 
-	for_each_cpu(cpu, &cpu_pmu->supported_cpus) {
+	for_each_cpu(cpu, &armpmu->supported_cpus) {
 		int irq = per_cpu(hw_events->irq, cpu);
 		if (!irq)
 			continue;
@@ -580,7 +580,7 @@ static int cpu_pmu_request_irqs(struct arm_pmu *cpu_pmu)
 			return err;
 		}
 
-		cpumask_set_cpu(cpu, &cpu_pmu->active_irqs);
+		cpumask_set_cpu(cpu, &armpmu->active_irqs);
 	}
 
 	return 0;
@@ -742,7 +742,7 @@ static int cpu_pmu_init(struct arm_pmu *cpu_pmu)
 {
 	int err;
 
-	err = cpu_pmu_request_irqs(cpu_pmu);
+	err = armpmu_request_irqs(cpu_pmu);
 	if (err)
 		goto out;
 
@@ -761,7 +761,7 @@ static int cpu_pmu_init(struct arm_pmu *cpu_pmu)
 	cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_STARTING,
 					    &cpu_pmu->node);
 out:
-	cpu_pmu_free_irqs(cpu_pmu);
+	armpmu_free_irqs(cpu_pmu);
 	return err;
 }
 
-- 
1.9.1


* [PATCHv2 08/16] drivers/perf: arm_pmu: split cpu-local irq request/free
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (6 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 07/16] drivers/perf: arm_pmu: rename irq request/free functions Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 09/16] drivers/perf: arm_pmu: move irq request/free into probe Mark Rutland
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Currently we have functions to request/free all IRQs for a given PMU.
While this works today, this won't work for ACPI, where we don't know
the full set of IRQs up front, and need to request them separately.

To enable supporting ACPI, this patch splits out the cpu-local
request/free into new functions, allowing us to request/free individual
IRQs.

As this makes it possible (and necessary) to request a PPI once per CPU,
an additional check is added to detect mismatched PPIs. This shouldn't
matter for the DT / platform case, as we check this when parsing.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 88 +++++++++++++++++++++++++++++---------------------
 1 file changed, 52 insertions(+), 36 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 3c4e97d..c09c379 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -527,65 +527,81 @@ int perf_num_counters(void)
 }
 EXPORT_SYMBOL_GPL(perf_num_counters);
 
-static void armpmu_free_irqs(struct arm_pmu *armpmu)
+static void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
 {
-	int cpu;
 	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
+	int irq = per_cpu(hw_events->irq, cpu);
 
-	for_each_cpu(cpu, &armpmu->supported_cpus) {
-		int irq = per_cpu(hw_events->irq, cpu);
-		if (!irq)
-			continue;
+	if (!cpumask_test_and_clear_cpu(cpu, &armpmu->active_irqs))
+		return;
 
-		if (irq_is_percpu(irq)) {
-			free_percpu_irq(irq, &hw_events->percpu_pmu);
-			break;
-		}
+	if (irq_is_percpu(irq)) {
+		free_percpu_irq(irq, &hw_events->percpu_pmu);
+		cpumask_clear(&armpmu->active_irqs);
+		return;
+	}
 
-		if (!cpumask_test_and_clear_cpu(cpu, &armpmu->active_irqs))
-			continue;
+	free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
+}
 
-		free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
-	}
+static void armpmu_free_irqs(struct arm_pmu *armpmu)
+{
+	int cpu;
+
+	for_each_cpu(cpu, &armpmu->supported_cpus)
+		armpmu_free_irq(armpmu, cpu);
 }
 
-static int armpmu_request_irqs(struct arm_pmu *armpmu)
+static int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 {
-	int cpu, err;
+	int err = 0;
 	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
 	const irq_handler_t handler = armpmu_dispatch_irq;
+	int irq = per_cpu(hw_events->irq, cpu);
+	if (!irq)
+		return 0;
 
-	for_each_cpu(cpu, &armpmu->supported_cpus) {
-		int irq = per_cpu(hw_events->irq, cpu);
-		if (!irq)
-			continue;
+	if (irq_is_percpu(irq) && cpumask_empty(&armpmu->active_irqs)) {
+		err = request_percpu_irq(irq, handler, "arm-pmu",
+					 &hw_events->percpu_pmu);
+	} else if (irq_is_percpu(irq)) {
+		int other_cpu = cpumask_first(&armpmu->active_irqs);
+		int other_irq = per_cpu(hw_events->irq, other_cpu);
 
-		if (irq_is_percpu(irq)) {
-			err = request_percpu_irq(irq, handler, "arm-pmu",
-						 &hw_events->percpu_pmu);
-			if (err) {
-				pr_err("unable to request IRQ%d for ARM PMU counters\n",
-					irq);
-			}
-
-			return err;
+		if (irq != other_irq) {
+			pr_warn("mismatched PPIs detected.\n");
+			err = -EINVAL;
 		}
-
+	} else {
 		err = request_irq(irq, handler,
 				  IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu",
 				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
-		if (err) {
-			pr_err("unable to request IRQ%d for ARM PMU counters\n",
-				irq);
-			return err;
-		}
+	}
 
-		cpumask_set_cpu(cpu, &armpmu->active_irqs);
+	if (err) {
+		pr_err("unable to request IRQ%d for ARM PMU counters\n",
+			irq);
+		return err;
 	}
 
+	cpumask_set_cpu(cpu, &armpmu->active_irqs);
+
 	return 0;
 }
 
+static int armpmu_request_irqs(struct arm_pmu *armpmu)
+{
+	int cpu, err;
+
+	for_each_cpu(cpu, &armpmu->supported_cpus) {
+		err = armpmu_request_irq(armpmu, cpu);
+		if (err)
+			break;
+	}
+
+	return err;
+}
+
 static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
 {
 	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
-- 
1.9.1


* [PATCHv2 09/16] drivers/perf: arm_pmu: move irq request/free into probe
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (7 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 08/16] drivers/perf: arm_pmu: split cpu-local irq request/free Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 10/16] drivers/perf: arm_pmu: split out platform device probe logic Mark Rutland
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Currently we request (and potentially free) all IRQs for a given PMU in
cpu_pmu_init(). This works for platform/DT probing today, but it doesn't
fit ACPI well as we don't have all our affinity data up-front.

In preparation for ACPI support, fold the IRQ request/free into
arm_pmu_device_probe(), which will remain specific to platform/DT
probing.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index c09c379..f387d61 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -758,10 +758,6 @@ static int cpu_pmu_init(struct arm_pmu *cpu_pmu)
 {
 	int err;
 
-	err = armpmu_request_irqs(cpu_pmu);
-	if (err)
-		goto out;
-
 	err = cpuhp_state_add_instance(CPUHP_AP_PERF_ARM_STARTING,
 				       &cpu_pmu->node);
 	if (err)
@@ -777,7 +773,6 @@ static int cpu_pmu_init(struct arm_pmu *cpu_pmu)
 	cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_STARTING,
 					    &cpu_pmu->node);
 out:
-	armpmu_free_irqs(cpu_pmu);
 	return err;
 }
 
@@ -1073,12 +1068,18 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 		goto out_free;
 	}
 
+	ret = armpmu_request_irqs(pmu);
+	if (ret)
+		goto out_free_irqs;
+
 	ret = armpmu_register(pmu);
 	if (ret)
 		goto out_free;
 
 	return 0;
 
+out_free_irqs:
+	armpmu_free_irqs(pmu);
 out_free:
 	pr_info("%s: failed to register PMU devices!\n",
 		of_node_full_name(node));
-- 
1.9.1


* [PATCHv2 10/16] drivers/perf: arm_pmu: split out platform device probe logic
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (8 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 09/16] drivers/perf: arm_pmu: move irq request/free into probe Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 11/16] arm64: add function to get a cpu's MADT GICC table Mark Rutland
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we've split the pdev and DT probing logic from the runtime
management, let's move the former into its own file. We gain a few lines
due to the copyright header and includes, but this should keep the logic
clearly separated, and paves the way for adding ACPI support in a
similar fashion.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/Makefile           |   2 +-
 drivers/perf/arm_pmu.c          | 226 +-------------------------------------
 drivers/perf/arm_pmu_platform.c | 235 ++++++++++++++++++++++++++++++++++++++++
 include/linux/perf/arm_pmu.h    |   7 ++
 4 files changed, 247 insertions(+), 223 deletions(-)
 create mode 100644 drivers/perf/arm_pmu_platform.c

diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index ef0c6b2..925cd39 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -1,4 +1,4 @@
-obj-$(CONFIG_ARM_PMU) += arm_pmu.o
+obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
 obj-$(CONFIG_QCOM_L2_PMU)	+= qcom_l2_pmu.o
 obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
 obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index f387d61..b3bedfa 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -16,7 +16,6 @@
 #include <linux/cpu_pm.h>
 #include <linux/export.h>
 #include <linux/kernel.h>
-#include <linux/of_device.h>
 #include <linux/perf/arm_pmu.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
@@ -25,7 +24,6 @@
 #include <linux/irq.h>
 #include <linux/irqdesc.h>
 
-#include <asm/cputype.h>
 #include <asm/irq_regs.h>
 
 static int
@@ -544,7 +542,7 @@ static void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
 	free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
 }
 
-static void armpmu_free_irqs(struct arm_pmu *armpmu)
+void armpmu_free_irqs(struct arm_pmu *armpmu)
 {
 	int cpu;
 
@@ -589,7 +587,7 @@ static int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 	return 0;
 }
 
-static int armpmu_request_irqs(struct arm_pmu *armpmu)
+int armpmu_request_irqs(struct arm_pmu *armpmu)
 {
 	int cpu, err;
 
@@ -783,161 +781,7 @@ static void cpu_pmu_destroy(struct arm_pmu *cpu_pmu)
 					    &cpu_pmu->node);
 }
 
-/*
- * CPU PMU identification and probing.
- */
-static int probe_current_pmu(struct arm_pmu *pmu,
-			     const struct pmu_probe_info *info)
-{
-	int cpu = get_cpu();
-	unsigned int cpuid = read_cpuid_id();
-	int ret = -ENODEV;
-
-	pr_info("probing PMU on CPU %d\n", cpu);
-
-	for (; info->init != NULL; info++) {
-		if ((cpuid & info->mask) != info->cpuid)
-			continue;
-		ret = info->init(pmu);
-		break;
-	}
-
-	put_cpu();
-	return ret;
-}
-
-static int pmu_parse_percpu_irq(struct arm_pmu *pmu, int irq)
-{
-	int cpu, ret;
-	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
-
-	ret = irq_get_percpu_devid_partition(irq, &pmu->supported_cpus);
-	if (ret)
-		return ret;
-
-	for_each_cpu(cpu, &pmu->supported_cpus)
-		per_cpu(hw_events->irq, cpu) = irq;
-
-	return 0;
-}
-
-static bool pmu_has_irq_affinity(struct device_node *node)
-{
-	return !!of_find_property(node, "interrupt-affinity", NULL);
-}
-
-static int pmu_parse_irq_affinity(struct device_node *node, int i)
-{
-	struct device_node *dn;
-	int cpu;
-
-	/*
-	 * If we don't have an interrupt-affinity property, we guess irq
-	 * affinity matches our logical CPU order, as we used to assume.
-	 * This is fragile, so we'll warn in pmu_parse_irqs().
-	 */
-	if (!pmu_has_irq_affinity(node))
-		return i;
-
-	dn = of_parse_phandle(node, "interrupt-affinity", i);
-	if (!dn) {
-		pr_warn("failed to parse interrupt-affinity[%d] for %s\n",
-			i, node->name);
-		return -EINVAL;
-	}
-
-	/* Now look up the logical CPU number */
-	for_each_possible_cpu(cpu) {
-		struct device_node *cpu_dn;
-
-		cpu_dn = of_cpu_device_node_get(cpu);
-		of_node_put(cpu_dn);
-
-		if (dn == cpu_dn)
-			break;
-	}
-
-	if (cpu >= nr_cpu_ids) {
-		pr_warn("failed to find logical CPU for %s\n", dn->name);
-	}
-
-	of_node_put(dn);
-
-	return cpu;
-}
-
-static int pmu_parse_irqs(struct arm_pmu *pmu)
-{
-	int i = 0, irqs;
-	struct platform_device *pdev = pmu->plat_device;
-	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
-
-	irqs = platform_irq_count(pdev);
-	if (irqs < 0) {
-		pr_err("unable to count PMU IRQs\n");
-		return irqs;
-	}
-
-	/*
-	 * In this case we have no idea which CPUs are covered by the PMU.
-	 * To match our prior behaviour, we assume all CPUs in this case.
-	 */
-	if (irqs == 0) {
-		pr_warn("no irqs for PMU, sampling events not supported\n");
-		pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
-		cpumask_setall(&pmu->supported_cpus);
-		return 0;
-	}
-
-	if (irqs == 1) {
-		int irq = platform_get_irq(pdev, 0);
-		if (irq && irq_is_percpu(irq))
-			return pmu_parse_percpu_irq(pmu, irq);
-	}
-
-	if (!pmu_has_irq_affinity(pdev->dev.of_node)) {
-		pr_warn("no interrupt-affinity property for %s, guessing.\n",
-			of_node_full_name(pdev->dev.of_node));
-	}
-
-	/*
-	 * Some platforms have all PMU IRQs OR'd into a single IRQ, with a
-	 * special platdata function that attempts to demux them.
-	 */
-	if (dev_get_platdata(&pdev->dev))
-		cpumask_setall(&pmu->supported_cpus);
-
-	for (i = 0; i < irqs; i++) {
-		int cpu, irq;
-
-		irq = platform_get_irq(pdev, i);
-		if (WARN_ON(irq <= 0))
-			continue;
-
-		if (irq_is_percpu(irq)) {
-			pr_warn("multiple PPIs or mismatched SPI/PPI detected\n");
-			return -EINVAL;
-		}
-
-		cpu = pmu_parse_irq_affinity(pdev->dev.of_node, i);
-		if (cpu < 0)
-			return cpu;
-		if (cpu >= nr_cpu_ids)
-			continue;
-
-		if (per_cpu(hw_events->irq, cpu)) {
-			pr_warn("multiple PMU IRQs for the same CPU detected\n");
-			return -EINVAL;
-		}
-
-		per_cpu(hw_events->irq, cpu) = irq;
-		cpumask_set_cpu(cpu, &pmu->supported_cpus);
-	}
-
-	return 0;
-}
-
-static struct arm_pmu *armpmu_alloc(void)
+struct arm_pmu *armpmu_alloc(void)
 {
 	struct arm_pmu *pmu;
 	int cpu;
@@ -994,7 +838,7 @@ static struct arm_pmu *armpmu_alloc(void)
 	return NULL;
 }
 
-static void armpmu_free(struct arm_pmu *pmu)
+void armpmu_free(struct arm_pmu *pmu)
 {
 	free_percpu(pmu->hw_events);
 	kfree(pmu);
@@ -1025,68 +869,6 @@ int armpmu_register(struct arm_pmu *pmu)
 	return ret;
 }
 
-int arm_pmu_device_probe(struct platform_device *pdev,
-			 const struct of_device_id *of_table,
-			 const struct pmu_probe_info *probe_table)
-{
-	const struct of_device_id *of_id;
-	armpmu_init_fn init_fn;
-	struct device_node *node = pdev->dev.of_node;
-	struct arm_pmu *pmu;
-	int ret = -ENODEV;
-
-	pmu = armpmu_alloc();
-	if (!pmu)
-		return -ENOMEM;
-
-	pmu->plat_device = pdev;
-
-	ret = pmu_parse_irqs(pmu);
-	if (ret)
-		goto out_free;
-
-	if (node && (of_id = of_match_node(of_table, pdev->dev.of_node))) {
-		init_fn = of_id->data;
-
-		pmu->secure_access = of_property_read_bool(pdev->dev.of_node,
-							   "secure-reg-access");
-
-		/* arm64 systems boot only as non-secure */
-		if (IS_ENABLED(CONFIG_ARM64) && pmu->secure_access) {
-			pr_warn("ignoring \"secure-reg-access\" property for arm64\n");
-			pmu->secure_access = false;
-		}
-
-		ret = init_fn(pmu);
-	} else if (probe_table) {
-		cpumask_setall(&pmu->supported_cpus);
-		ret = probe_current_pmu(pmu, probe_table);
-	}
-
-	if (ret) {
-		pr_info("%s: failed to probe PMU!\n", of_node_full_name(node));
-		goto out_free;
-	}
-
-	ret = armpmu_request_irqs(pmu);
-	if (ret)
-		goto out_free_irqs;
-
-	ret = armpmu_register(pmu);
-	if (ret)
-		goto out_free;
-
-	return 0;
-
-out_free_irqs:
-	armpmu_free_irqs(pmu);
-out_free:
-	pr_info("%s: failed to register PMU devices!\n",
-		of_node_full_name(node));
-	armpmu_free(pmu);
-	return ret;
-}
-
 static int arm_pmu_hp_init(void)
 {
 	int ret;
diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
new file mode 100644
index 0000000..d725adc
--- /dev/null
+++ b/drivers/perf/arm_pmu_platform.c
@@ -0,0 +1,235 @@
+/*
+ * platform_device probing code for ARM performance counters.
+ *
+ * Copyright (C) 2009 picoChip Designs, Ltd., Jamie Iles
+ * Copyright (C) 2010 ARM Ltd., Will Deacon <will.deacon@arm.com>
+ */
+#define pr_fmt(fmt) "hw perfevents: " fmt
+
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
+#include <linux/kconfig.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/percpu.h>
+#include <linux/perf/arm_pmu.h>
+#include <linux/platform_device.h>
+#include <linux/printk.h>
+#include <linux/smp.h>
+
+static int probe_current_pmu(struct arm_pmu *pmu,
+			     const struct pmu_probe_info *info)
+{
+	int cpu = get_cpu();
+	unsigned int cpuid = read_cpuid_id();
+	int ret = -ENODEV;
+
+	pr_info("probing PMU on CPU %d\n", cpu);
+
+	for (; info->init != NULL; info++) {
+		if ((cpuid & info->mask) != info->cpuid)
+			continue;
+		ret = info->init(pmu);
+		break;
+	}
+
+	put_cpu();
+	return ret;
+}
+
+static int pmu_parse_percpu_irq(struct arm_pmu *pmu, int irq)
+{
+	int cpu, ret;
+	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
+
+	ret = irq_get_percpu_devid_partition(irq, &pmu->supported_cpus);
+	if (ret)
+		return ret;
+
+	for_each_cpu(cpu, &pmu->supported_cpus)
+		per_cpu(hw_events->irq, cpu) = irq;
+
+	return 0;
+}
+
+static bool pmu_has_irq_affinity(struct device_node *node)
+{
+	return !!of_find_property(node, "interrupt-affinity", NULL);
+}
+
+static int pmu_parse_irq_affinity(struct device_node *node, int i)
+{
+	struct device_node *dn;
+	int cpu;
+
+	/*
+	 * If we don't have an interrupt-affinity property, we guess irq
+	 * affinity matches our logical CPU order, as we used to assume.
+	 * This is fragile, so we'll warn in pmu_parse_irqs().
+	 */
+	if (!pmu_has_irq_affinity(node))
+		return i;
+
+	dn = of_parse_phandle(node, "interrupt-affinity", i);
+	if (!dn) {
+		pr_warn("failed to parse interrupt-affinity[%d] for %s\n",
+			i, node->name);
+		return -EINVAL;
+	}
+
+	/* Now look up the logical CPU number */
+	for_each_possible_cpu(cpu) {
+		struct device_node *cpu_dn;
+
+		cpu_dn = of_cpu_device_node_get(cpu);
+		of_node_put(cpu_dn);
+
+		if (dn == cpu_dn)
+			break;
+	}
+
+	if (cpu >= nr_cpu_ids) {
+		pr_warn("failed to find logical CPU for %s\n", dn->name);
+	}
+
+	of_node_put(dn);
+
+	return cpu;
+}
+
+static int pmu_parse_irqs(struct arm_pmu *pmu)
+{
+	int i = 0, nr_irqs;
+	struct platform_device *pdev = pmu->plat_device;
+	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
+
+	nr_irqs = platform_irq_count(pdev);
+	if (nr_irqs < 0) {
+		pr_err("unable to count PMU IRQs\n");
+		return nr_irqs;
+	}
+
+	/*
+	 * In this case we have no idea which CPUs are covered by the PMU.
+	 * To match our prior behaviour, we assume all CPUs in this case.
+	 */
+	if (nr_irqs == 0) {
+		pr_warn("no irqs for PMU, sampling events not supported\n");
+		pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
+		cpumask_setall(&pmu->supported_cpus);
+		return 0;
+	}
+
+	if (nr_irqs == 1) {
+		int irq = platform_get_irq(pdev, 0);
+		if (irq && irq_is_percpu(irq))
+			return pmu_parse_percpu_irq(pmu, irq);
+	}
+
+	if (!pmu_has_irq_affinity(pdev->dev.of_node)) {
+		pr_warn("no interrupt-affinity property for %s, guessing.\n",
+			of_node_full_name(pdev->dev.of_node));
+	}
+
+	/*
+	 * Some platforms have all PMU IRQs OR'd into a single IRQ, with a
+	 * special platdata function that attempts to demux them.
+	 */
+	if (dev_get_platdata(&pdev->dev))
+		cpumask_setall(&pmu->supported_cpus);
+
+	for (i = 0; i < nr_irqs; i++) {
+		int cpu, irq;
+
+		irq = platform_get_irq(pdev, i);
+		if (WARN_ON(irq <= 0))
+			continue;
+
+		if (irq_is_percpu(irq)) {
+			pr_warn("multiple PPIs or mismatched SPI/PPI detected\n");
+			return -EINVAL;
+		}
+
+		cpu = pmu_parse_irq_affinity(pdev->dev.of_node, i);
+		if (cpu < 0)
+			return cpu;
+		if (cpu >= nr_cpu_ids)
+			continue;
+
+		if (per_cpu(hw_events->irq, cpu)) {
+			pr_warn("multiple PMU IRQs for the same CPU detected\n");
+			return -EINVAL;
+		}
+
+		per_cpu(hw_events->irq, cpu) = irq;
+		cpumask_set_cpu(cpu, &pmu->supported_cpus);
+	}
+
+	return 0;
+}
+
+int arm_pmu_device_probe(struct platform_device *pdev,
+			 const struct of_device_id *of_table,
+			 const struct pmu_probe_info *probe_table)
+{
+	const struct of_device_id *of_id;
+	armpmu_init_fn init_fn;
+	struct device_node *node = pdev->dev.of_node;
+	struct arm_pmu *pmu;
+	int ret = -ENODEV;
+
+	pmu = armpmu_alloc();
+	if (!pmu)
+		return -ENOMEM;
+
+	pmu->plat_device = pdev;
+
+	ret = pmu_parse_irqs(pmu);
+	if (ret)
+		goto out_free;
+
+	if (node && (of_id = of_match_node(of_table, pdev->dev.of_node))) {
+		init_fn = of_id->data;
+
+		pmu->secure_access = of_property_read_bool(pdev->dev.of_node,
+							   "secure-reg-access");
+
+		/* arm64 systems boot only as non-secure */
+		if (IS_ENABLED(CONFIG_ARM64) && pmu->secure_access) {
+			pr_warn("ignoring \"secure-reg-access\" property for arm64\n");
+			pmu->secure_access = false;
+		}
+
+		ret = init_fn(pmu);
+	} else if (probe_table) {
+		cpumask_setall(&pmu->supported_cpus);
+		ret = probe_current_pmu(pmu, probe_table);
+	}
+
+	if (ret) {
+		pr_info("%s: failed to probe PMU!\n", of_node_full_name(node));
+		goto out_free;
+	}
+
+	ret = armpmu_request_irqs(pmu);
+	if (ret)
+		goto out_free_irqs;
+
+	ret = armpmu_register(pmu);
+	if (ret)
+		goto out_free;
+
+	return 0;
+
+out_free_irqs:
+	armpmu_free_irqs(pmu);
+out_free:
+	pr_info("%s: failed to register PMU devices!\n",
+		of_node_full_name(node));
+	armpmu_free(pmu);
+	return ret;
+}
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 4249914..25556eb 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -159,6 +159,13 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 			 const struct of_device_id *of_table,
 			 const struct pmu_probe_info *probe_table);
 
+/* Internal functions only for core arm_pmu code */
+struct arm_pmu *armpmu_alloc(void);
+void armpmu_free(struct arm_pmu *pmu);
+int armpmu_register(struct arm_pmu *pmu);
+int armpmu_request_irqs(struct arm_pmu *armpmu);
+void armpmu_free_irqs(struct arm_pmu *armpmu);
+
 #define ARMV8_PMU_PDEV_NAME "armv8-pmu"
 
 #endif /* CONFIG_ARM_PMU */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCHv2 11/16] arm64: add function to get a cpu's MADT GICC table
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (9 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 10/16] drivers/perf: arm_pmu: split out platform device probe logic Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 12/16] arm64: parking: kill acpi_set_mailbox_entry() Mark Rutland
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Currently the ACPI parking protocol code needs to parse each CPU's MADT
GICC table to extract the mailbox address and so on. Each time we parse
a GICC table, we call back into the parking protocol code so that it can
extract the fields it needs.

This has been fine so far, but we're about to have more code that needs
to extract data from the GICC tables, and adding a callback for each
user is going to get unwieldy.

Instead, this patch ensures that we stash a copy of each CPU's GICC
table at boot time, such that anything needing to parse it can later
request it. This will allow for other parsers of GICC, and for
simplification to the ACPI parking protocol code. Note that we must
store a copy, rather than a pointer, since the core ACPI code
temporarily maps/unmaps tables while iterating over them.

Since we parse the MADT before we know how many CPUs we have (and hence
before we set up the percpu areas), we must use an NR_CPUS-sized array.
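The stash-and-accessor shape this patch follows can be sketched in plain
userspace C (stand-in struct and names here; the real code copies
struct acpi_madt_generic_interrupt in smp.c and exposes it via
acpi_cpu_get_madt_gicc()):

```c
#include <assert.h>

#define NR_CPUS 8

/* Stand-in for struct acpi_madt_generic_interrupt */
struct madt_gicc {
	unsigned long long parked_address;
	unsigned int performance_interrupt;
};

/* One copy per possible CPU, filled while the MADT is still mapped */
static struct madt_gicc cpu_madt_gicc[NR_CPUS];

/*
 * Stash a *copy*, not a pointer: the source table is only temporarily
 * mapped while the MADT is being iterated.
 */
void stash_gicc(int cpu, const struct madt_gicc *src)
{
	cpu_madt_gicc[cpu] = *src;
}

/* Later users get a pointer to our stable copy */
struct madt_gicc *get_madt_gicc(int cpu)
{
	return &cpu_madt_gicc[cpu];
}
```

The copy remains valid even after the original mapping is gone, which is
the whole point of stashing it at boot.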

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/acpi.h |  2 ++
 arch/arm64/kernel/smp.c       | 10 ++++++++++
 2 files changed, 12 insertions(+)

diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
index c1976c0..0e99978 100644
--- a/arch/arm64/include/asm/acpi.h
+++ b/arch/arm64/include/asm/acpi.h
@@ -85,6 +85,8 @@ static inline bool acpi_has_cpu_in_madt(void)
 	return true;
 }
 
+struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu);
+
 static inline void arch_fix_phys_package_id(int num, u32 slot) { }
 void __init acpi_init_cpus(void);
 
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index ef1caae..390c277 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -518,6 +518,13 @@ static int __init smp_cpu_setup(int cpu)
 static unsigned int cpu_count = 1;
 
 #ifdef CONFIG_ACPI
+static struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];
+
+struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
+{
+	return &cpu_madt_gicc[cpu];
+}
+
 /*
  * acpi_map_gic_cpu_interface - parse processor MADT entry
  *
@@ -552,6 +559,7 @@ static int __init smp_cpu_setup(int cpu)
 			return;
 		}
 		bootcpu_valid = true;
+		cpu_madt_gicc[0] = *processor;
 		early_map_cpu_to_node(0, acpi_numa_get_nid(0, hwid));
 		return;
 	}
@@ -562,6 +570,8 @@ static int __init smp_cpu_setup(int cpu)
 	/* map the logical cpu id to cpu MPIDR */
 	cpu_logical_map(cpu_count) = hwid;
 
+	cpu_madt_gicc[cpu_count] = *processor;
+
 	/*
 	 * Set-up the ACPI parking protocol cpu entries
 	 * while initializing the cpu_logical_map to
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCHv2 12/16] arm64: parking: kill acpi_set_mailbox_entry()
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (10 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 11/16] arm64: add function to get a cpu's MADT GICC table Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 13/16] arm64: parking: fix type endianness Mark Rutland
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we have acpi_cpu_get_madt_gicc(), we can use values from the
MADT GICC table directly, and no longer need to stash a copy in our own
data structure.

This allows us to kill off the acpi_set_mailbox_entry() callback, too.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/acpi.h             |  5 ----
 arch/arm64/kernel/acpi_parking_protocol.c | 40 ++++++++++---------------------
 arch/arm64/kernel/smp.c                   | 11 ---------
 3 files changed, 13 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
index 0e99978..ebf88e3 100644
--- a/arch/arm64/include/asm/acpi.h
+++ b/arch/arm64/include/asm/acpi.h
@@ -96,13 +96,8 @@ static inline void acpi_init_cpus(void) { }
 
 #ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL
 bool acpi_parking_protocol_valid(int cpu);
-void __init
-acpi_set_mailbox_entry(int cpu, struct acpi_madt_generic_interrupt *processor);
 #else
 static inline bool acpi_parking_protocol_valid(int cpu) { return false; }
-static inline void
-acpi_set_mailbox_entry(int cpu, struct acpi_madt_generic_interrupt *processor)
-{}
 #endif
 
 static inline const char *acpi_get_enable_method(int cpu)
diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c
index 1f5655c..e2c7399 100644
--- a/arch/arm64/kernel/acpi_parking_protocol.c
+++ b/arch/arm64/kernel/acpi_parking_protocol.c
@@ -28,36 +28,21 @@ struct parking_protocol_mailbox {
 	__le64 entry_point;
 };
 
-struct cpu_mailbox_entry {
-	struct parking_protocol_mailbox __iomem *mailbox;
-	phys_addr_t mailbox_addr;
-	u8 version;
-	u8 gic_cpu_id;
-};
-
-static struct cpu_mailbox_entry cpu_mailbox_entries[NR_CPUS];
-
-void __init acpi_set_mailbox_entry(int cpu,
-				   struct acpi_madt_generic_interrupt *p)
-{
-	struct cpu_mailbox_entry *cpu_entry = &cpu_mailbox_entries[cpu];
-
-	cpu_entry->mailbox_addr = p->parked_address;
-	cpu_entry->version = p->parking_version;
-	cpu_entry->gic_cpu_id = p->cpu_interface_number;
-}
+static DEFINE_PER_CPU(struct parking_protocol_mailbox __iomem *, mailboxes);
 
 bool acpi_parking_protocol_valid(int cpu)
 {
-	struct cpu_mailbox_entry *cpu_entry = &cpu_mailbox_entries[cpu];
+	struct acpi_madt_generic_interrupt *p = acpi_cpu_get_madt_gicc(cpu);
 
-	return cpu_entry->mailbox_addr && cpu_entry->version;
+	return p->parked_address && p->parking_version;
 }
 
 static int acpi_parking_protocol_cpu_init(unsigned int cpu)
 {
+	struct acpi_madt_generic_interrupt *p = acpi_cpu_get_madt_gicc(cpu);
+
 	pr_debug("%s: ACPI parked addr=%llx\n", __func__,
-		  cpu_mailbox_entries[cpu].mailbox_addr);
+		 p->parked_address);
 
 	return 0;
 }
@@ -69,7 +54,7 @@ static int acpi_parking_protocol_cpu_prepare(unsigned int cpu)
 
 static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 {
-	struct cpu_mailbox_entry *cpu_entry = &cpu_mailbox_entries[cpu];
+	struct acpi_madt_generic_interrupt *p = acpi_cpu_get_madt_gicc(cpu);
 	struct parking_protocol_mailbox __iomem *mailbox;
 	__le32 cpu_id;
 
@@ -84,7 +69,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 	 * by the kernel (it clashes with the linear mapping attributes
 	 * specifications).
 	 */
-	mailbox = ioremap(cpu_entry->mailbox_addr, sizeof(*mailbox));
+	mailbox = ioremap(p->parked_address, sizeof(*mailbox));
 	if (!mailbox)
 		return -EIO;
 
@@ -102,7 +87,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 	 * stash the mailbox address mapping to use it for further FW
 	 * checks in the postboot method
 	 */
-	cpu_entry->mailbox = mailbox;
+	per_cpu(mailboxes, cpu) = mailbox;
 
 	/*
 	 * We write the entry point and cpu id as LE regardless of the
@@ -111,7 +96,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 	 * Boot-Loader's endianness before jumping.
 	 */
 	writeq_relaxed(__pa_symbol(secondary_entry), &mailbox->entry_point);
-	writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id);
+	writel_relaxed(p->cpu_interface_number, &mailbox->cpu_id);
 
 	arch_send_wakeup_ipi_mask(cpumask_of(cpu));
 
@@ -121,10 +106,11 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 static void acpi_parking_protocol_cpu_postboot(void)
 {
 	int cpu = smp_processor_id();
-	struct cpu_mailbox_entry *cpu_entry = &cpu_mailbox_entries[cpu];
-	struct parking_protocol_mailbox __iomem *mailbox = cpu_entry->mailbox;
+	struct parking_protocol_mailbox __iomem *mailbox;
 	__le64 entry_point;
 
+	mailbox = per_cpu(mailboxes, cpu);
+
 	entry_point = readq_relaxed(&mailbox->entry_point);
 	/*
 	 * Check if firmware has cleared the entry_point as expected
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 390c277..b37dd83 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -572,17 +572,6 @@ struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
 
 	cpu_madt_gicc[cpu_count] = *processor;
 
-	/*
-	 * Set-up the ACPI parking protocol cpu entries
-	 * while initializing the cpu_logical_map to
-	 * avoid parsing MADT entries multiple times for
-	 * nothing (ie a valid cpu_logical_map entry should
-	 * contain a valid parking protocol data set to
-	 * initialize the cpu if the parking protocol is
-	 * the only available enable method).
-	 */
-	acpi_set_mailbox_entry(cpu_count, processor);
-
 	early_map_cpu_to_node(cpu_count, acpi_numa_get_nid(cpu_count, hwid));
 
 	cpu_count++;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCHv2 13/16] arm64: parking: fix type endianness
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (11 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 12/16] arm64: parking: kill acpi_set_mailbox_entry() Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-06 18:29 ` [PATCHv2 14/16] drivers/perf: arm_pmu: add ACPI framework Mark Rutland
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

The read*{,_relaxed}() accessors return a value of native CPU
endianness, swizzling the LE value they read from an __iomem pointer.

Thus it isn't correct to treat their return values as __le32 or __le64,
and we should use u32 and u64 respectively instead.

This makes sparse much happier, but should have no functional impact.
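A userspace sketch of the distinction (read_le32() is a hypothetical
helper; the kernel's readl_relaxed() performs the equivalent
load-and-convert internally):

```c
#include <assert.h>
#include <stdint.h>

/*
 * A LE device register is a sequence of bytes, least-significant first.
 * Loading it and converting yields a value in *native* CPU endianness,
 * so the correct result type is a plain uint32_t, not a __le32.
 */
uint32_t read_le32(const uint8_t *reg)
{
	return (uint32_t)reg[0] |
	       ((uint32_t)reg[1] << 8) |
	       ((uint32_t)reg[2] << 16) |
	       ((uint32_t)reg[3] << 24);
}
```

The shifts make the conversion endian-agnostic: the same native value
comes back on both LE and BE hosts, which is exactly why annotating the
result as __le32/__le64 upsets sparse.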

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/acpi_parking_protocol.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c
index e2c7399..b878b15 100644
--- a/arch/arm64/kernel/acpi_parking_protocol.c
+++ b/arch/arm64/kernel/acpi_parking_protocol.c
@@ -56,7 +56,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 {
 	struct acpi_madt_generic_interrupt *p = acpi_cpu_get_madt_gicc(cpu);
 	struct parking_protocol_mailbox __iomem *mailbox;
-	__le32 cpu_id;
+	u32 cpu_id;
 
 	/*
 	 * Map mailbox memory with attribute device nGnRE (ie ioremap -
@@ -107,7 +107,7 @@ static void acpi_parking_protocol_cpu_postboot(void)
 {
 	int cpu = smp_processor_id();
 	struct parking_protocol_mailbox __iomem *mailbox;
-	__le64 entry_point;
+	u64 entry_point;
 
 	mailbox = per_cpu(mailboxes, cpu);
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCHv2 14/16] drivers/perf: arm_pmu: add ACPI framework
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (12 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 13/16] arm64: parking: fix type endianness Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-07 14:27   ` Will Deacon
  2017-04-06 18:29 ` [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing Mark Rutland
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

This patch adds framework code to handle parsing PMU data out of the
MADT, sanity checking this, and managing the association of CPUs (and
their interrupts) with appropriate logical PMUs.

For the time being, we expect that only one PMU driver (PMUv3) will make
use of this, and we simply pass in a single probe function.

This is based on an earlier patch from Jeremy Linton.
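The CPU-to-logical-PMU association described above boils down to a
find-or-allocate keyed on the CPU's ID register value; a stand-alone
sketch with simplified types and hypothetical names (the real code is
arm_pmu_acpi_find_alloc_pmu() and arm_pmu_acpi_cpu_starting() below):

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_CPUS 8

struct fake_pmu {
	unsigned long cpuid;	/* MIDR shared by all CPUs of this PMU */
};

static struct fake_pmu *probed_pmus[MAX_CPUS];

/* Return the PMU already probed for this MIDR, or allocate a new one */
struct fake_pmu *find_alloc_pmu(unsigned long cpuid)
{
	struct fake_pmu *pmu;
	int cpu;

	for (cpu = 0; cpu < MAX_CPUS; cpu++) {
		if (probed_pmus[cpu] && probed_pmus[cpu]->cpuid == cpuid)
			return probed_pmus[cpu];
	}

	pmu = calloc(1, sizeof(*pmu));
	if (pmu)
		pmu->cpuid = cpuid;
	return pmu;
}

/* Record the association, as the hotplug callback does per CPU */
void associate_cpu(int cpu, struct fake_pmu *pmu)
{
	probed_pmus[cpu] = pmu;
}
```

On a big.LITTLE system, CPUs with matching MIDRs share one logical PMU
while a differing MIDR yields a second instance, which is how the two
armv8_pmuv3_N PMUs in the cover letter's test arise.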

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/Kconfig         |   4 +
 drivers/perf/Makefile        |   1 +
 drivers/perf/arm_pmu.c       |   4 +-
 drivers/perf/arm_pmu_acpi.c  | 240 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/cpuhotplug.h   |   1 +
 include/linux/perf/arm_pmu.h |  11 ++
 6 files changed, 259 insertions(+), 2 deletions(-)
 create mode 100644 drivers/perf/arm_pmu_acpi.c

diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index c436e0d..aa587ed 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -12,6 +12,10 @@ config ARM_PMU
 	  Say y if you want to use CPU performance monitors on ARM-based
 	  systems.
 
+config ARM_PMU_ACPI
+	depends on ARM_PMU && ACPI
+	def_bool y
+
 config QCOM_L2_PMU
 	bool "Qualcomm Technologies L2-cache PMU"
 	depends on ARCH_QCOM && ARM64 && PERF_EVENTS && ACPI
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index 925cd39..6420bd4 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
+obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
 obj-$(CONFIG_QCOM_L2_PMU)	+= qcom_l2_pmu.o
 obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
 obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index b3bedfa..dc459eb 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -525,7 +525,7 @@ int perf_num_counters(void)
 }
 EXPORT_SYMBOL_GPL(perf_num_counters);
 
-static void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
+void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
 {
 	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
 	int irq = per_cpu(hw_events->irq, cpu);
@@ -550,7 +550,7 @@ void armpmu_free_irqs(struct arm_pmu *armpmu)
 		armpmu_free_irq(armpmu, cpu);
 }
 
-static int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
+int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 {
 	int err = 0;
 	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
diff --git a/drivers/perf/arm_pmu_acpi.c b/drivers/perf/arm_pmu_acpi.c
new file mode 100644
index 0000000..5cacd7b
--- /dev/null
+++ b/drivers/perf/arm_pmu_acpi.c
@@ -0,0 +1,240 @@
+/*
+ * ACPI probing code for ARM performance counters.
+ *
+ * Copyright (C) 2017 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/acpi.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/perf/arm_pmu.h>
+
+#include <asm/cputype.h>
+
+static DEFINE_PER_CPU(struct arm_pmu *, probed_pmus);
+static DEFINE_PER_CPU(int, pmu_irqs);
+
+static int arm_pmu_acpi_parse_irq(int cpu)
+{
+	struct acpi_madt_generic_interrupt *gicc;
+	int gsi, trigger;
+
+	gicc = acpi_cpu_get_madt_gicc(cpu);
+	if (WARN_ON(!gicc))
+		return -EINVAL;
+
+	gsi = gicc->performance_interrupt;
+	if (gicc->flags & ACPI_MADT_PERFORMANCE_IRQ_MODE)
+		trigger = ACPI_EDGE_SENSITIVE;
+	else
+		trigger = ACPI_LEVEL_SENSITIVE;
+
+	/*
+	 * Helpfully, the MADT GICC doesn't have a polarity flag for the
+	 * "performance interrupt". Luckily, on compliant GICs the polarity is
+	 * fixed in HW (for both SPIs and PPIs), and thus we don't care. Other
+	 * interrupt controllers are not supported with ACPI.
+	 *
+	 * Here we pass in ACPI_ACTIVE_HIGH to keep the core code happy.
+	 */
+	return acpi_register_gsi(NULL, gsi, trigger, ACPI_ACTIVE_HIGH);
+}
+
+static void arm_pmu_acpi_unparse_irq(int cpu)
+{
+	struct acpi_madt_generic_interrupt *gicc;
+	int gsi;
+
+	gicc = acpi_cpu_get_madt_gicc(cpu);
+	if (!gicc)
+		return;
+
+	gsi = gicc->performance_interrupt;
+	acpi_unregister_gsi(gsi);
+}
+
+static int arm_pmu_acpi_parse_irqs(void)
+{
+	int cpu, irq;
+
+	for_each_possible_cpu(cpu) {
+		irq = arm_pmu_acpi_parse_irq(cpu);
+		if (irq < 0) {
+			pr_warn("Unable to parse ACPI PMU IRQ for CPU%d: %d\n",
+				cpu, irq);
+			goto out_err;
+		} else if (irq == 0) {
+			pr_warn("No ACPI PMU IRQ for CPU%d\n", cpu);
+		}
+
+		per_cpu(pmu_irqs, cpu) = irq;
+	}
+
+	return 0;
+
+out_err:
+	for_each_possible_cpu(cpu) {
+		arm_pmu_acpi_unparse_irq(cpu);
+		per_cpu(pmu_irqs, cpu) = 0;
+	}
+
+	return irq;
+}
+
+static struct arm_pmu *arm_pmu_acpi_find_alloc_pmu(void)
+{
+	unsigned long cpuid = read_cpuid_id();
+	struct arm_pmu *pmu;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		pmu = per_cpu(probed_pmus, cpu);
+		if (!pmu || pmu->acpi_cpuid != cpuid)
+			continue;
+
+		return pmu;
+	}
+
+	pmu = armpmu_alloc();
+	if (!pmu) {
+		pr_warn("Unable to allocate PMU for CPU%d\n",
+			smp_processor_id());
+		return NULL;
+	}
+
+	pmu->acpi_cpuid = cpuid;
+
+	return pmu;
+}
+
+/*
+ * This must run before the common arm_pmu hotplug logic, so that we can
+ * associate a CPU and its interrupt before the common code tries to manage the
+ * affinity and so on.
+ *
+ * Note that hotplug events are serialized, so we cannot race with another CPU
+ * coming up. The perf core won't open events while a hotplug event is in
+ * progress.
+ */
+static int arm_pmu_acpi_cpu_starting(unsigned int cpu)
+{
+	struct arm_pmu *pmu;
+	struct pmu_hw_events __percpu *hw_events;
+	int irq;
+
+	/* If we've already probed this CPU, we have nothing to do */
+	if (per_cpu(probed_pmus, cpu))
+		return 0;
+
+	irq = per_cpu(pmu_irqs, cpu);
+
+	pmu = arm_pmu_acpi_find_alloc_pmu();
+	if (!pmu)
+		return -ENOMEM;
+
+	cpumask_set_cpu(cpu, &pmu->supported_cpus);
+
+	per_cpu(probed_pmus, cpu) = pmu;
+
+	/*
+	 * Log and request the IRQ so the core arm_pmu code can manage it.  In
+	 * some situations (e.g. mismatched PPIs), we may fail to request the
+	 * IRQ. However, it may be too late for us to do anything about it.
+	 * The common ARM PMU code will log a warning in this case.
+	 */
+	hw_events = pmu->hw_events;
+	per_cpu(hw_events->irq, cpu) = irq;
+	armpmu_request_irq(pmu, cpu);
+
+	/*
+	 * Ideally, we'd probe the PMU here when we find the first matching
+	 * CPU. We can't do that for several reasons; see the comment in
+	 * arm_pmu_acpi_init().
+	 *
+	 * So for the time being, we're done.
+	 */
+	return 0;
+}
+
+int arm_pmu_acpi_probe(armpmu_init_fn init_fn)
+{
+	int pmu_idx = 0;
+	int cpu, ret;
+
+	if (acpi_disabled)
+		return 0;
+
+	/*
+	 * Initialise and register the set of PMUs which we know about right
+	 * now. Ideally we'd do this in arm_pmu_acpi_cpu_starting() so that we
+	 * could handle late hotplug, but this may lead to deadlock since we
+	 * might try to register a hotplug notifier instance from within a
+	 * hotplug notifier.
+	 *
+	 * There's also the problem of having access to the right init_fn,
+	 * without tying this too deeply into the "real" PMU driver.
+	 *
+	 * For the moment, as with the platform/DT case, we need at least one
+	 * of a PMU's CPUs to be online at probe time.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct arm_pmu *pmu = per_cpu(probed_pmus, cpu);
+		char *base_name;
+
+		if (!pmu || pmu->name)
+			continue;
+
+		ret = init_fn(pmu);
+		if (ret == -ENODEV) {
+			/* PMU not handled by this driver, or not present */
+			continue;
+		} else if (ret) {
+			pr_warn("Unable to initialise PMU for CPU%d\n", cpu);
+			return ret;
+		}
+
+		base_name = pmu->name;
+		pmu->name = kasprintf(GFP_KERNEL, "%s_%d", base_name, pmu_idx++);
+		if (!pmu->name) {
+			pr_warn("Unable to allocate PMU name for CPU%d\n", cpu);
+			return -ENOMEM;
+		}
+
+		ret = armpmu_register(pmu);
+		if (ret) {
+			pr_warn("Failed to register PMU for CPU%d\n", cpu);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int arm_pmu_acpi_init(void)
+{
+	int ret;
+
+	if (acpi_disabled)
+		return 0;
+
+	/*
+	 * We can't request IRQs yet, since we don't know the cookie value
+	 * until we know which CPUs share the same logical PMU. We'll handle
+	 * that in arm_pmu_acpi_cpu_starting().
+	 */
+	ret = arm_pmu_acpi_parse_irqs();
+	if (ret)
+		return ret;
+
+	ret = cpuhp_setup_state(CPUHP_AP_PERF_ARM_ACPI_STARTING,
+				"perf/arm/pmu_acpi:starting",
+				arm_pmu_acpi_cpu_starting, NULL);
+
+	return ret;
+}
+subsys_initcall(arm_pmu_acpi_init)
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index cfcfab3..0f2a803 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -94,6 +94,7 @@ enum cpuhp_state {
 	CPUHP_AP_ARM_VFP_STARTING,
 	CPUHP_AP_ARM64_DEBUG_MONITORS_STARTING,
 	CPUHP_AP_PERF_ARM_HW_BREAKPOINT_STARTING,
+	CPUHP_AP_PERF_ARM_ACPI_STARTING,
 	CPUHP_AP_PERF_ARM_STARTING,
 	CPUHP_AP_ARM_L2X0_STARTING,
 	CPUHP_AP_ARM_ARCH_TIMER_STARTING,
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 25556eb..1360dd6 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -117,6 +117,9 @@ struct arm_pmu {
 	struct notifier_block	cpu_pm_nb;
 	/* the attr_groups array must be NULL-terminated */
 	const struct attribute_group *attr_groups[ARMPMU_NR_ATTR_GROUPS + 1];
+
+	/* Only to be used by ACPI probing code */
+	unsigned long acpi_cpuid;
 };
 
 #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))
@@ -159,12 +162,20 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 			 const struct of_device_id *of_table,
 			 const struct pmu_probe_info *probe_table);
 
+#ifdef CONFIG_ACPI
+int arm_pmu_acpi_probe(armpmu_init_fn init_fn);
+#else
+static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
+#endif
+
 /* Internal functions only for core arm_pmu code */
 struct arm_pmu *armpmu_alloc(void);
 void armpmu_free(struct arm_pmu *pmu);
 int armpmu_register(struct arm_pmu *pmu);
 int armpmu_request_irqs(struct arm_pmu *armpmu);
 void armpmu_free_irqs(struct arm_pmu *armpmu);
+int armpmu_request_irq(struct arm_pmu *armpmu, int cpu);
+void armpmu_free_irq(struct arm_pmu *armpmu, int cpu);
 
 #define ARMV8_PMU_PDEV_NAME "armv8-pmu"
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (13 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 14/16] drivers/perf: arm_pmu: add ACPI framework Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-07 14:29   ` Will Deacon
  2017-04-06 18:29 ` [PATCHv2 16/16] arm64: pmuv3: use arm_pmu ACPI framework Mark Rutland
  2017-04-07 14:30 ` [PATCHv2 00/16] arm_pmu: ACPI support Will Deacon
  16 siblings, 1 reply; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

When probing via ACPI, we won't know up-front whether a CPU has a
PMUv3-compatible PMU. Thus we need to consult ID registers at probe time.

This patch updates our PMUv3 probing code to test for the presence of
PMUv3 functionality before touching any PMUv3-specific registers, and
before updating the struct arm_pmu with PMUv3 data.

When a PMUv3-compatible PMU is not present, probing will return -ENODEV.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/perf_event.c | 85 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 69 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 57ae9d9..50790d7 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -957,11 +957,24 @@ static int armv8_vulcan_map_event(struct perf_event *event)
 				ARMV8_PMU_EVTYPE_EVENT);
 }
 
+struct armv8pmu_probe_info {
+	struct arm_pmu *pmu;
+	bool present;
+};
+
 static void __armv8pmu_probe_pmu(void *info)
 {
-	struct arm_pmu *cpu_pmu = info;
+	struct armv8pmu_probe_info *probe = info;
+	struct arm_pmu *cpu_pmu = probe->pmu;
+	u64 dfr0;
 	u32 pmceid[2];
 
+	dfr0 = read_sysreg(id_aa64dfr0_el1);
+	if (((dfr0 >> ID_AA64DFR0_PMUVER_SHIFT) & 0xf) != 1)
+		return;
+
+	probe->present = true;
+
 	/* Read the nb of CNTx counters supported from PMNC */
 	cpu_pmu->num_events = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT)
 		& ARMV8_PMU_PMCR_N_MASK;
@@ -979,13 +992,27 @@ static void __armv8pmu_probe_pmu(void *info)
 
 static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
 {
-	return smp_call_function_any(&cpu_pmu->supported_cpus,
+	struct armv8pmu_probe_info probe = {
+		.pmu = cpu_pmu,
+		.present = false,
+	};
+	int ret;
+
+	ret = smp_call_function_any(&cpu_pmu->supported_cpus,
 				    __armv8pmu_probe_pmu,
-				    cpu_pmu, 1);
+				    &probe, 1);
+	if (ret)
+		return ret;
+
+	return probe.present ? 0 : -ENODEV;
 }
 
-static void armv8_pmu_init(struct arm_pmu *cpu_pmu)
+static int armv8_pmu_init(struct arm_pmu *cpu_pmu)
 {
+	int ret = armv8pmu_probe_pmu(cpu_pmu);
+	if (ret)
+		return ret;
+
 	cpu_pmu->handle_irq		= armv8pmu_handle_irq,
 	cpu_pmu->enable			= armv8pmu_enable_event,
 	cpu_pmu->disable		= armv8pmu_disable_event,
@@ -997,78 +1024,104 @@ static void armv8_pmu_init(struct arm_pmu *cpu_pmu)
 	cpu_pmu->reset			= armv8pmu_reset,
 	cpu_pmu->max_period		= (1LLU << 32) - 1,
 	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
+
+	return 0;
 }
 
 static int armv8_pmuv3_init(struct arm_pmu *cpu_pmu)
 {
-	armv8_pmu_init(cpu_pmu);
+	int ret = armv8_pmu_init(cpu_pmu);
+	if (ret)
+		return ret;
+
 	cpu_pmu->name			= "armv8_pmuv3";
 	cpu_pmu->map_event		= armv8_pmuv3_map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] =
 		&armv8_pmuv3_events_attr_group;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_FORMATS] =
 		&armv8_pmuv3_format_attr_group;
-	return armv8pmu_probe_pmu(cpu_pmu);
+
+	return 0;
 }
 
 static int armv8_a53_pmu_init(struct arm_pmu *cpu_pmu)
 {
-	armv8_pmu_init(cpu_pmu);
+	int ret = armv8_pmu_init(cpu_pmu);
+	if (ret)
+		return ret;
+
 	cpu_pmu->name			= "armv8_cortex_a53";
 	cpu_pmu->map_event		= armv8_a53_map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] =
 		&armv8_pmuv3_events_attr_group;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_FORMATS] =
 		&armv8_pmuv3_format_attr_group;
-	return armv8pmu_probe_pmu(cpu_pmu);
+
+	return 0;
 }
 
 static int armv8_a57_pmu_init(struct arm_pmu *cpu_pmu)
 {
-	armv8_pmu_init(cpu_pmu);
+	int ret = armv8_pmu_init(cpu_pmu);
+	if (ret)
+		return ret;
+
 	cpu_pmu->name			= "armv8_cortex_a57";
 	cpu_pmu->map_event		= armv8_a57_map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] =
 		&armv8_pmuv3_events_attr_group;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_FORMATS] =
 		&armv8_pmuv3_format_attr_group;
-	return armv8pmu_probe_pmu(cpu_pmu);
+
+	return 0;
 }
 
 static int armv8_a72_pmu_init(struct arm_pmu *cpu_pmu)
 {
-	armv8_pmu_init(cpu_pmu);
+	int ret = armv8_pmu_init(cpu_pmu);
+	if (ret)
+		return ret;
+
 	cpu_pmu->name			= "armv8_cortex_a72";
 	cpu_pmu->map_event		= armv8_a57_map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] =
 		&armv8_pmuv3_events_attr_group;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_FORMATS] =
 		&armv8_pmuv3_format_attr_group;
-	return armv8pmu_probe_pmu(cpu_pmu);
+
+	return 0;
 }
 
 static int armv8_thunder_pmu_init(struct arm_pmu *cpu_pmu)
 {
-	armv8_pmu_init(cpu_pmu);
+	int ret = armv8_pmu_init(cpu_pmu);
+	if (ret)
+		return ret;
+
 	cpu_pmu->name			= "armv8_cavium_thunder";
 	cpu_pmu->map_event		= armv8_thunder_map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] =
 		&armv8_pmuv3_events_attr_group;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_FORMATS] =
 		&armv8_pmuv3_format_attr_group;
-	return armv8pmu_probe_pmu(cpu_pmu);
+
+	return 0;
 }
 
 static int armv8_vulcan_pmu_init(struct arm_pmu *cpu_pmu)
 {
-	armv8_pmu_init(cpu_pmu);
+	int ret = armv8_pmu_init(cpu_pmu);
+	if (ret)
+		return ret;
+
 	cpu_pmu->name			= "armv8_brcm_vulcan";
 	cpu_pmu->map_event		= armv8_vulcan_map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] =
 		&armv8_pmuv3_events_attr_group;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_FORMATS] =
 		&armv8_pmuv3_format_attr_group;
-	return armv8pmu_probe_pmu(cpu_pmu);
+
+	return 0;
 }
 
 static const struct of_device_id armv8_pmu_of_device_ids[] = {
-- 
1.9.1


* [PATCHv2 16/16] arm64: pmuv3: use arm_pmu ACPI framework
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (14 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing Mark Rutland
@ 2017-04-06 18:29 ` Mark Rutland
  2017-04-07 14:30 ` [PATCHv2 00/16] arm_pmu: ACPI support Will Deacon
  16 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-06 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we have a framework to handle the ACPI bits, make the PMUv3
code use this. The framework is a little different to what was
originally envisaged, and we can drop some unused support code in the
process of moving over to it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/perf_event.c | 26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 50790d7..48bc505 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1134,24 +1134,9 @@ static int armv8_vulcan_pmu_init(struct arm_pmu *cpu_pmu)
 	{},
 };
 
-/*
- * Non DT systems have their micro/arch events probed at run-time.
- * A fairly complete list of generic events are provided and ones that
- * aren't supported by the current PMU are disabled.
- */
-static const struct pmu_probe_info armv8_pmu_probe_table[] = {
-	PMU_PROBE(0, 0, armv8_pmuv3_init), /* enable all defined counters */
-	{ /* sentinel value */ }
-};
-
 static int armv8_pmu_device_probe(struct platform_device *pdev)
 {
-	if (acpi_disabled)
-		return arm_pmu_device_probe(pdev, armv8_pmu_of_device_ids,
-					    NULL);
-
-	return arm_pmu_device_probe(pdev, armv8_pmu_of_device_ids,
-				    armv8_pmu_probe_table);
+	return arm_pmu_device_probe(pdev, armv8_pmu_of_device_ids, NULL);
 }
 
 static struct platform_driver armv8_pmu_driver = {
@@ -1162,4 +1147,11 @@ static int armv8_pmu_device_probe(struct platform_device *pdev)
 	.probe		= armv8_pmu_device_probe,
 };
 
-builtin_platform_driver(armv8_pmu_driver);
+int __init armv8_pmu_driver_init(void)
+{
+	if (acpi_disabled)
+		return platform_driver_register(&armv8_pmu_driver);
+	else
+		return arm_pmu_acpi_probe(armv8_pmuv3_init);
+}
+device_initcall(armv8_pmu_driver_init)
-- 
1.9.1


* [PATCHv2 14/16] drivers/perf: arm_pmu: add ACPI framework
  2017-04-06 18:29 ` [PATCHv2 14/16] drivers/perf: arm_pmu: add ACPI framework Mark Rutland
@ 2017-04-07 14:27   ` Will Deacon
  2017-04-07 14:49     ` Mark Rutland
  0 siblings, 1 reply; 25+ messages in thread
From: Will Deacon @ 2017-04-07 14:27 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Mark,

Just a couple of minor comments below.

On Thu, Apr 06, 2017 at 07:29:21PM +0100, Mark Rutland wrote:
> diff --git a/drivers/perf/arm_pmu_acpi.c b/drivers/perf/arm_pmu_acpi.c
> new file mode 100644
> index 0000000..5cacd7b
> --- /dev/null
> +++ b/drivers/perf/arm_pmu_acpi.c
> @@ -0,0 +1,240 @@
> +/*
> + * ACPI probing code for ARM performance counters.
> + *
> + * Copyright (C) 2017 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include <linux/acpi.h>
> +#include <linux/cpumask.h>
> +#include <linux/init.h>
> +#include <linux/percpu.h>
> +#include <linux/perf/arm_pmu.h>
> +
> +#include <asm/cputype.h>
> +
> +static DEFINE_PER_CPU(struct arm_pmu *, probed_pmus);
> +static DEFINE_PER_CPU(int, pmu_irqs);
> +
> +static int arm_pmu_acpi_parse_irq(int cpu)
> +{
> +	struct acpi_madt_generic_interrupt *gicc;
> +	int gsi, trigger;
> +
> +	gicc = acpi_cpu_get_madt_gicc(cpu);
> +	if (WARN_ON(!gicc))
> +		return -EINVAL;
> +
> +	gsi = gicc->performance_interrupt;
> +	if (gicc->flags & ACPI_MADT_PERFORMANCE_IRQ_MODE)
> +		trigger = ACPI_EDGE_SENSITIVE;
> +	else
> +		trigger = ACPI_LEVEL_SENSITIVE;
> +
> +	/*
> +	 * Helpfully, the MADT GICC doesn't have a polarity flag for the
> +	 * "performance interrupt". Luckily, on compliant GICs the polarity is
> +	 * fixed in HW (for both SPIs and PPIs), and thus we don't care. Other
> +	 * interrupt controllers are not supported with ACPI.
> +	 *
> +	 * Here we pass in ACPI_ACTIVE_HIGH to keep the core code happy.
> +	 */
> +	return acpi_register_gsi(NULL, gsi, trigger, ACPI_ACTIVE_HIGH);
> +}
> +
> +static void arm_pmu_acpi_unparse_irq(int cpu)
> +{

"unparse" is a bit weird. Can you rename this register/unregister please?

> +	struct acpi_madt_generic_interrupt *gicc;
> +	int gsi;
> +
> +	gicc = acpi_cpu_get_madt_gicc(cpu);
> +	if (!gicc)
> +		return;
> +
> +	gsi = gicc->performance_interrupt;
> +	acpi_unregister_gsi(gsi);
> +}
> +
> +static int arm_pmu_acpi_parse_irqs(void)
> +{
> +	int cpu, irq;
> +
> +	for_each_possible_cpu(cpu) {
> +		int irq = arm_pmu_acpi_parse_irq(cpu);
> +		if (irq < 0) {
> +			pr_warn("Unable to parse ACPI PMU IRQ for CPU%d: %d\n",
> +				cpu, irq);
> +			goto out_err;
> +		} else if (irq == 0) {
> +			pr_warn("No ACPI PMU IRQ for CPU%d\n", cpu);
> +		}
> +
> +		per_cpu(pmu_irqs, cpu) = irq;
> +	}
> +
> +	return 0;
> +
> +out_err:
> +	for_each_possible_cpu(cpu) {
> +		arm_pmu_acpi_unparse_irq(cpu);
> +		per_cpu(pmu_irqs, cpu) = 0;
> +	}

It might work at the moment (I really can't tell), but I'd rather we didn't
unregister GSIs that we didn't register.

Will


* [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing
  2017-04-06 18:29 ` [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing Mark Rutland
@ 2017-04-07 14:29   ` Will Deacon
  2017-04-07 16:30     ` Mark Rutland
  2017-04-07 18:05     ` Mark Rutland
  0 siblings, 2 replies; 25+ messages in thread
From: Will Deacon @ 2017-04-07 14:29 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Apr 06, 2017 at 07:29:22PM +0100, Mark Rutland wrote:
> When probing via ACPI, we won't know up-front whether a CPU has a PMUv3
> compatible PMU. Thus we need to consult ID registers during probe time.
> 
> This patch updates our PMUv3 probing code to test for the presence of
> PMUv3 functionality before touching any PMUv3-specific registers, and
> before updating the struct arm_pmu with PMUv3 data.
> 
> When a PMUv3-compatible PMU is not present, probing will return -ENODEV.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/kernel/perf_event.c | 85 ++++++++++++++++++++++++++++++++++--------
>  1 file changed, 69 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 57ae9d9..50790d7 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -957,11 +957,24 @@ static int armv8_vulcan_map_event(struct perf_event *event)
>  				ARMV8_PMU_EVTYPE_EVENT);
>  }
>  
> +struct armv8pmu_probe_info {
> +	struct arm_pmu *pmu;
> +	bool present;
> +};
> +
>  static void __armv8pmu_probe_pmu(void *info)
>  {
> -	struct arm_pmu *cpu_pmu = info;
> +	struct armv8pmu_probe_info *probe = info;
> +	struct arm_pmu *cpu_pmu = probe->pmu;
> +	u64 dfr0;
>  	u32 pmceid[2];
>  
> +	dfr0 = read_sysreg(id_aa64dfr0_el1);
> +	if (((dfr0 >> ID_AA64DFR0_PMUVER_SHIFT) & 0xf) != 1)
> +		return;

Shouldn't we be using one of those fancy cpuid_feature_extract_*_field helpers?

It would also be worth spinning this up on qemu, if you get a chance, as
I don't think that implements the PMU.

Will


* [PATCHv2 00/16] arm_pmu: ACPI support
  2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
                   ` (15 preceding siblings ...)
  2017-04-06 18:29 ` [PATCHv2 16/16] arm64: pmuv3: use arm_pmu ACPI framework Mark Rutland
@ 2017-04-07 14:30 ` Will Deacon
  2017-04-07 16:13   ` Mark Rutland
  16 siblings, 1 reply; 25+ messages in thread
From: Will Deacon @ 2017-04-07 14:30 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Apr 06, 2017 at 07:29:07PM +0100, Mark Rutland wrote:
> This series implements ACPI support in the ARM PMU code. It borrows some code
> from Jeremy's series [1], but takes a different approach to probing and
> association, using the usual hotplug state machine, minimising external changes
> required, and simplifying the relationship with the common arm_pmu code.

Patches 12 and 13 seem unrelated to this series, and I've made some small
comments on a couple of other patches. Other than that, looks good.

Will


* [PATCHv2 14/16] drivers/perf: arm_pmu: add ACPI framework
  2017-04-07 14:27   ` Will Deacon
@ 2017-04-07 14:49     ` Mark Rutland
  0 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-07 14:49 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Apr 07, 2017 at 03:27:06PM +0100, Will Deacon wrote:
> Hi Mark,
> 
> Just a couple of minor comments below.
> 
> On Thu, Apr 06, 2017 at 07:29:21PM +0100, Mark Rutland wrote:

> > +static int arm_pmu_acpi_parse_irq(int cpu)

> > +static void arm_pmu_acpi_unparse_irq(int cpu)

> "unparse" is a bit weird. Can you rename this register/unregister please?

Sure thing; that's much better.

[...]

> > +static int arm_pmu_acpi_parse_irqs(void)
> > +{
> > +	int cpu, irq;
> > +
> > +	for_each_possible_cpu(cpu) {
> > +		int irq = arm_pmu_acpi_parse_irq(cpu);
> > +		if (irq < 0) {
> > +			pr_warn("Unable to parse ACPI PMU IRQ for CPU%d: %d\n",
> > +				cpu, irq);
> > +			goto out_err;
> > +		} else if (irq == 0) {
> > +			pr_warn("No ACPI PMU IRQ for CPU%d\n", cpu);
> > +		}
> > +
> > +		per_cpu(pmu_irqs, cpu) = irq;
> > +	}
> > +
> > +	return 0;
> > +
> > +out_err:
> > +	for_each_possible_cpu(cpu) {
> > +		arm_pmu_acpi_unparse_irq(cpu);
> > +		per_cpu(pmu_irqs, cpu) = 0;
> > +	}
> 
> It might work at the moment (I really can't tell), but I'd rather we didn't
> unregister GSIs that we didn't register.

Urgh, yes.

That gets really more fun if you consider PPIs (which could be
mismatched).

If we're happy with a hideous loop we can do:

	int irq, cpu, irq_cpu;

	for_each_possible_cpu(cpu) {
		irq = per_cpu(pmu_irqs, cpu);
		if (!irq)
			continue;

		arm_pmu_acpi_unparse_irq(cpu);

		/* blat all instances of the irq we just freed */
		for_each_possible_cpu(irq_cpu) {
			if (per_cpu(pmu_irqs, irq_cpu) != irq)
				continue;
			per_cpu(pmu_irqs, irq_cpu) = 0;
		}
	}

... which I'll spin for v3.

Thanks,
Mark.


* [PATCHv2 00/16] arm_pmu: ACPI support
  2017-04-07 14:30 ` [PATCHv2 00/16] arm_pmu: ACPI support Will Deacon
@ 2017-04-07 16:13   ` Mark Rutland
  0 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-07 16:13 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Apr 07, 2017 at 03:30:06PM +0100, Will Deacon wrote:
> On Thu, Apr 06, 2017 at 07:29:07PM +0100, Mark Rutland wrote:
> > This series implements ACPI support in the ARM PMU code. It borrows some code
> > from Jeremy's series [1], but takes a different approach to probing and
> > association, using the usual hotplug state machine, minimising external changes
> > required, and simplifying the relationship with the common arm_pmu code.
> 
> Patches 12 and 13 seem unrelated to this series, and I've made some small
> comments on a couple of other patches.

Those were cleanup that relied on the GICC rework in this series.

I'll drop those for v3, and send them separately once the rest is in.

Thanks,
Mark.


* [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing
  2017-04-07 14:29   ` Will Deacon
@ 2017-04-07 16:30     ` Mark Rutland
  2017-04-07 18:05     ` Mark Rutland
  1 sibling, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-04-07 16:30 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Apr 07, 2017 at 03:29:09PM +0100, Will Deacon wrote:
> >  static void __armv8pmu_probe_pmu(void *info)
> >  {
> > -	struct arm_pmu *cpu_pmu = info;
> > +	struct armv8pmu_probe_info *probe = info;
> > +	struct arm_pmu *cpu_pmu = probe->pmu;
> > +	u64 dfr0;
> >  	u32 pmceid[2];
> >  
> > +	dfr0 = read_sysreg(id_aa64dfr0_el1);
> > +	if (((dfr0 >> ID_AA64DFR0_PMUVER_SHIFT) & 0xf) != 1)
> > +		return;
> 
> Shouldn't we be using one of those fancy cpuid_feature_extract_*_field helpers?

Sure. I can make this:

	u64 dfr0, pmuver;

	...

	dfr0 = read_sysreg(id_aa64dfr0_el1);
	pmuver = cpuid_feature_extract_unsigned_field(dfr0,
			ID_AA64DFR0_PMUVER_SHIFT);
	if (pmuver != 1)
		return;

> It would also be worth spinning this up on qemu, if you get a chance, as
> I don't think that implements the PMU.

I will try to give that a go shortly.

Thanks,
Mark.


* [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing
  2017-04-07 14:29   ` Will Deacon
  2017-04-07 16:30     ` Mark Rutland
@ 2017-04-07 18:05     ` Mark Rutland
  2017-04-07 19:09       ` Jeremy Linton
  1 sibling, 1 reply; 25+ messages in thread
From: Mark Rutland @ 2017-04-07 18:05 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Apr 07, 2017 at 03:29:09PM +0100, Will Deacon wrote:
> On Thu, Apr 06, 2017 at 07:29:22PM +0100, Mark Rutland wrote:
> > When probing via ACPI, we won't know up-front whether a CPU has a PMUv3
> > compatible PMU. Thus we need to consult ID registers during probe time.
> > 
> > This patch updates our PMUv3 probing code to test for the presence of
> > PMUv3 functionality before touching any PMUv3-specific registers, and
> > before updating the struct arm_pmu with PMUv3 data.

[...]

> It would also be worth spinning this up on qemu, if you get a chance, as
> I don't think that implements the PMU.

That raises the usual problem with GSIs in static ACPI tables; the MADT
GICC doesn't have any way of describing the absence of a PMU interrupt.

Booting a TCG qemu, the PMU GSI is zero, which is a valid GSI...

There's a spec bug to fix here.

Urrgh.

Thanks,
Mark.


* [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing
  2017-04-07 18:05     ` Mark Rutland
@ 2017-04-07 19:09       ` Jeremy Linton
  0 siblings, 0 replies; 25+ messages in thread
From: Jeremy Linton @ 2017-04-07 19:09 UTC (permalink / raw)
  To: linux-arm-kernel

On 04/07/2017 01:05 PM, Mark Rutland wrote:
> On Fri, Apr 07, 2017 at 03:29:09PM +0100, Will Deacon wrote:
>> On Thu, Apr 06, 2017 at 07:29:22PM +0100, Mark Rutland wrote:
>>> When probing via ACPI, we won't know up-front whether a CPU has a PMUv3
>>> compatible PMU. Thus we need to consult ID registers during probe time.
>>>
>>> This patch updates our PMUv3 probing code to test for the presence of
>>> PMUv3 functionality before touching any PMUv3-specific registers, and
>>> before updating the struct arm_pmu with PMUv3 data.
>
> [...]
>
>> It would also be worth spinning this up on qemu, if you get a chance, as
>> I don't think that implements the PMU.
>
> That raises the usual problem with GSIs in static ACPI tables; the MADT
> GICC doesn't have any way of describing the absence of a PMU interrupt.
>
> Booting a TCG qemu, the PMU GSI is zero, which is a valid GSI...
>
> There's a spec bug to fix here.

Or a QEMU bug... The SBSA mandates a PMU AFAIK. So you won't have an 
ACPI system without one.


>
> Urrgh.
>
> Thanks,
> Mark.
>


end of thread (newest: 2017-04-07 19:09 UTC)

Thread overview: 25+ messages
2017-04-06 18:29 [PATCHv2 00/16] arm_pmu: ACPI support Mark Rutland
2017-04-06 18:29 ` [PATCHv2 01/16] drivers/perf: arm_pmu: remove pointless PMU disabling Mark Rutland
2017-04-06 18:29 ` [PATCHv2 02/16] drivers/perf: arm_pmu: define armpmu_init_fn Mark Rutland
2017-04-06 18:29 ` [PATCHv2 03/16] drivers/perf: arm_pmu: fold init into alloc Mark Rutland
2017-04-06 18:29 ` [PATCHv2 04/16] drivers/perf: arm_pmu: factor out pmu registration Mark Rutland
2017-04-06 18:29 ` [PATCHv2 05/16] drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs() Mark Rutland
2017-04-06 18:29 ` [PATCHv2 06/16] drivers/perf: arm_pmu: handle no platform_device Mark Rutland
2017-04-06 18:29 ` [PATCHv2 07/16] drivers/perf: arm_pmu: rename irq request/free functions Mark Rutland
2017-04-06 18:29 ` [PATCHv2 08/16] drivers/perf: arm_pmu: split cpu-local irq request/free Mark Rutland
2017-04-06 18:29 ` [PATCHv2 09/16] drivers/perf: arm_pmu: move irq request/free into probe Mark Rutland
2017-04-06 18:29 ` [PATCHv2 10/16] drivers/perf: arm_pmu: split out platform device probe logic Mark Rutland
2017-04-06 18:29 ` [PATCHv2 11/16] arm64: add function to get a cpu's MADT GICC table Mark Rutland
2017-04-06 18:29 ` [PATCHv2 12/16] arm64: parking: kill acpi_set_mailbox_entry() Mark Rutland
2017-04-06 18:29 ` [PATCHv2 13/16] arm64: parking: fix type endianness Mark Rutland
2017-04-06 18:29 ` [PATCHv2 14/16] drivers/perf: arm_pmu: add ACPI framework Mark Rutland
2017-04-07 14:27   ` Will Deacon
2017-04-07 14:49     ` Mark Rutland
2017-04-06 18:29 ` [PATCHv2 15/16] arm64: pmuv3: handle !PMUv3 when probing Mark Rutland
2017-04-07 14:29   ` Will Deacon
2017-04-07 16:30     ` Mark Rutland
2017-04-07 18:05     ` Mark Rutland
2017-04-07 19:09       ` Jeremy Linton
2017-04-06 18:29 ` [PATCHv2 16/16] arm64: pmuv3: use arm_pmu ACPI framework Mark Rutland
2017-04-07 14:30 ` [PATCHv2 00/16] arm_pmu: ACPI support Will Deacon
2017-04-07 16:13   ` Mark Rutland
