linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] qcom: add l2 cache perf events driver
@ 2016-06-03 21:03 Neil Leeder
  2016-06-03 21:03 ` [PATCH 1/2] perf: allow add to change event state Neil Leeder
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Neil Leeder @ 2016-06-03 21:03 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, Peter Zijlstra,
	Ingo Molnar, Arnaldo Carvalho de Melo
  Cc: linux-arm-msm, linux-kernel, linux-arm-kernel, Mark Langsdorf,
	Mark Salter, Jon Masters, Timur Tabi, cov, Neil Leeder

This adds a new dynamic PMU to the perf events framework to program
and control the L2 cache PMUs in some Qualcomm Technologies SoCs.

The driver exports formatting and event information to sysfs so it can
be used by the perf user space tools with the syntax:
perf stat -e l2cache/event=0x42/

One point to note is that certain combinations of events are invalid,
and these are detected in event_add(). Simply having event_add() fail
would cause event_sched_in() to mark the event Inactive, treating the
failure as over-allocation of counters; that leads to repeated
attempts to schedule the event and ends up producing only a
statistical count. The solution used here is to turn the conflicting
event off in event_add(). A single error message is generated, and
there are no recurring attempts to re-add the invalid event. For this
to work, event_sched_in() needs to detect that event_add() changed
the state, and not override it by forcing the event back to Inactive.

This patchset requires:
[PATCH] soc: qcom: provide mechanism for drivers to access L2 registers

Neil Leeder (2):
  perf: allow add to change event state
  soc: qcom: add l2 cache perf events driver

 drivers/soc/qcom/Kconfig               |  10 +
 drivers/soc/qcom/Makefile              |   1 +
 drivers/soc/qcom/perf_event_l2.c       | 917 +++++++++++++++++++++++++++++++++
 include/linux/soc/qcom/perf_event_l2.h |  82 +++
 kernel/events/core.c                   |   3 +-
 5 files changed, 1012 insertions(+), 1 deletion(-)
 create mode 100644 drivers/soc/qcom/perf_event_l2.c
 create mode 100644 include/linux/soc/qcom/perf_event_l2.h

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

* [PATCH 1/2] perf: allow add to change event state
  2016-06-03 21:03 [PATCH 0/2] qcom: add l2 cache perf events driver Neil Leeder
@ 2016-06-03 21:03 ` Neil Leeder
  2016-06-03 21:38   ` Peter Zijlstra
  2016-06-03 21:03 ` [PATCH 2/2] soc: qcom: add l2 cache perf events driver Neil Leeder
  2016-06-06  9:04 ` [PATCH 0/2] " Mark Rutland
  2 siblings, 1 reply; 13+ messages in thread
From: Neil Leeder @ 2016-06-03 21:03 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, Peter Zijlstra,
	Ingo Molnar, Arnaldo Carvalho de Melo
  Cc: linux-arm-msm, linux-kernel, linux-arm-kernel, Mark Langsdorf,
	Mark Salter, Jon Masters, Timur Tabi, cov, Neil Leeder

When the platform-specific pmu->add function returns
an error, it may have also changed the event's state.
If so, do not override that new state.

Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
---
 kernel/events/core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index c0ded24..95c4cf3d3 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1952,7 +1952,8 @@ event_sched_in(struct perf_event *event,
 	perf_log_itrace_start(event);
 
 	if (event->pmu->add(event, PERF_EF_START)) {
-		event->state = PERF_EVENT_STATE_INACTIVE;
+		if (event->state == PERF_EVENT_STATE_ACTIVE)
+			event->state = PERF_EVENT_STATE_INACTIVE;
 		event->oncpu = -1;
 		ret = -EAGAIN;
 		goto out;
-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

* [PATCH 2/2] soc: qcom: add l2 cache perf events driver
  2016-06-03 21:03 [PATCH 0/2] qcom: add l2 cache perf events driver Neil Leeder
  2016-06-03 21:03 ` [PATCH 1/2] perf: allow add to change event state Neil Leeder
@ 2016-06-03 21:03 ` Neil Leeder
  2016-06-06  9:51   ` Mark Rutland
  2016-06-06  9:04 ` [PATCH 0/2] " Mark Rutland
  2 siblings, 1 reply; 13+ messages in thread
From: Neil Leeder @ 2016-06-03 21:03 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, Peter Zijlstra,
	Ingo Molnar, Arnaldo Carvalho de Melo
  Cc: linux-arm-msm, linux-kernel, linux-arm-kernel, Mark Langsdorf,
	Mark Salter, Jon Masters, Timur Tabi, cov, Neil Leeder

Adds perf events support for the L2 cache PMU.

The L2 cache PMU driver is named 'l2cache' and can be used
with perf events to profile L2 events such as cache hits
and misses.
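Raw event codes follow the driver's NRCCG layout (prefix, reg, code,
group), with the field masks and shifts defined in the header added by
this patch. A user-space sketch of the decode, with the macros copied
from that header:

```c
#include <assert.h>
#include <stdint.h>

/* Field layout from include/linux/soc/qcom/perf_event_l2.h:
 * config[19:16] prefix, config[15:12] reg,
 * config[11:4]  code,   config[3:0]   group. */
#define L2_EVT_PFX_MASK    0xF0000
#define L2_EVT_REG_MASK    0x0F000
#define L2_EVT_CODE_MASK   0x00FF0
#define L2_EVT_GROUP_MASK  0x0000F
#define L2_EVT_PFX_SHIFT   16
#define L2_EVT_REG_SHIFT   12
#define L2_EVT_CODE_SHIFT  4

static uint32_t l2_evt_prefix(uint32_t c)
{
	return (c & L2_EVT_PFX_MASK) >> L2_EVT_PFX_SHIFT;
}

static uint32_t l2_evt_reg(uint32_t c)
{
	return (c & L2_EVT_REG_MASK) >> L2_EVT_REG_SHIFT;
}

static uint32_t l2_evt_code(uint32_t c)
{
	return (c & L2_EVT_CODE_MASK) >> L2_EVT_CODE_SHIFT;
}

static uint32_t l2_evt_group(uint32_t c)
{
	return c & L2_EVT_GROUP_MASK;
}
```

The field widths match the PMU_FORMAT_ATTR strings exported by the
driver (config:16-19, 12-15, 4-11, 0-3), so for example the raw code
0x42 decodes to event code 4 in event group 2.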

Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
---
 drivers/soc/qcom/Kconfig               |  10 +
 drivers/soc/qcom/Makefile              |   1 +
 drivers/soc/qcom/perf_event_l2.c       | 917 +++++++++++++++++++++++++++++++++
 include/linux/soc/qcom/perf_event_l2.h |  82 +++
 4 files changed, 1010 insertions(+)
 create mode 100644 drivers/soc/qcom/perf_event_l2.c
 create mode 100644 include/linux/soc/qcom/perf_event_l2.h

diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 21ec616..0b5ddb9 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -19,6 +19,16 @@ config QCOM_L2_ACCESSORS
 	  Provides support for accessing registers in the L2 cache
 	  for Qualcomm Technologies chips.
 
+config QCOM_PERF_EVENTS_L2
+	bool "Qualcomm Technologies L2-cache perf events"
+	depends on ARCH_QCOM && HW_PERF_EVENTS && ACPI
+	select QCOM_L2_ACCESSORS
+	help
+	  Provides support for the L2 cache performance monitor unit (PMU)
+	  in Qualcomm Technologies processors.
+	  Adds the L2 cache PMU into the perf events subsystem for
+	  monitoring L2 cache events.
+
 config QCOM_PM
 	bool "Qualcomm Power Management"
 	depends on ARCH_QCOM && !ARM64
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 6ef29b9..c8e89ca9 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_QCOM_GSBI)	+=	qcom_gsbi.o
 obj-$(CONFIG_QCOM_L2_ACCESSORS) += l2-accessors.o
+obj-$(CONFIG_QCOM_PERF_EVENTS_L2)	+= perf_event_l2.o
 obj-$(CONFIG_QCOM_PM)	+=	spm.o
 obj-$(CONFIG_QCOM_SMD) +=	smd.o
 obj-$(CONFIG_QCOM_SMD_RPM)	+= smd-rpm.o
diff --git a/drivers/soc/qcom/perf_event_l2.c b/drivers/soc/qcom/perf_event_l2.c
new file mode 100644
index 0000000..2485b9e
--- /dev/null
+++ b/drivers/soc/qcom/perf_event_l2.c
@@ -0,0 +1,917 @@
+/* Copyright (c) 2015,2016 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#define pr_fmt(fmt) "l2 perfevents: " fmt
+
+#include <linux/module.h>
+#include <linux/bitops.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/list.h>
+#include <linux/acpi.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/soc/qcom/perf_event_l2.h>
+#include <linux/soc/qcom/l2-accessors.h>
+
+/*
+ * The cache is made up of one or more slices; each slice has its own PMU.
+ * This structure represents one of the hardware PMUs.
+ */
+struct hml2_pmu {
+	struct list_head entry;
+	struct perf_event *events[MAX_L2_CTRS];
+	unsigned long used_mask[BITS_TO_LONGS(MAX_L2_EVENTS)];
+	unsigned int valid_cpus;
+	int on_cpu;
+	u8 cpu[MAX_CPUS_IN_CLUSTER];
+	atomic64_t prev_count[MAX_L2_CTRS];
+	spinlock_t pmu_lock;
+};
+
+/*
+ * Aggregate PMU. Implements the core pmu functions and manages
+ * the hardware PMUs.
+ */
+struct l2cache_pmu {
+	u32 num_pmus;
+	struct list_head pmus;
+	struct pmu pmu;
+	int num_counters;
+};
+
+#define to_l2cache_pmu(p) (container_of(p, struct l2cache_pmu, pmu))
+
+static DEFINE_PER_CPU(struct hml2_pmu *, cpu_to_pmu);
+static struct l2cache_pmu l2cache_pmu = { 0 };
+static u32 num_cpus_in_cluster;
+
+static u32 l2_cycle_ctr_idx;
+static u32 l2_reset_mask;
+static u32 mpidr_affl1_shift;
+
+static inline u32 idx_to_reg(u32 idx)
+{
+	u32 bit;
+
+	if (idx == l2_cycle_ctr_idx)
+		bit = BIT(L2CYCLE_CTR_BIT);
+	else
+		bit = BIT(idx);
+	return bit;
+}
+
+static struct hml2_pmu *get_hml2_pmu(struct l2cache_pmu *system, int cpu)
+{
+	if (cpu < 0)
+		cpu = smp_processor_id();
+
+	return per_cpu(cpu_to_pmu, cpu);
+}
+
+static void hml2_pmu__reset_on_slice(void *x)
+{
+	/* Reset all ctrs */
+	set_l2_indirect_reg(L2PMCR, L2PMCR_RESET_ALL);
+	set_l2_indirect_reg(L2PMCNTENCLR, l2_reset_mask);
+	set_l2_indirect_reg(L2PMINTENCLR, l2_reset_mask);
+	set_l2_indirect_reg(L2PMOVSCLR, l2_reset_mask);
+}
+
+static inline void hml2_pmu__reset(struct hml2_pmu *slice)
+{
+	int i;
+
+	if (per_cpu(cpu_to_pmu, smp_processor_id()) == slice) {
+		hml2_pmu__reset_on_slice(NULL);
+		return;
+	}
+
+	/* Call each cpu in the cluster until one works */
+	for (i = 0; i < slice->valid_cpus; i++) {
+		if (!smp_call_function_single(slice->cpu[i],
+					      hml2_pmu__reset_on_slice,
+					      NULL, 1))
+			return;
+	}
+
+	pr_err("Failed to reset on cluster with cpu %d\n", slice->cpu[0]);
+}
+
+static inline void hml2_pmu__init(struct hml2_pmu *slice)
+{
+	hml2_pmu__reset(slice);
+}
+
+static inline void hml2_pmu__enable(void)
+{
+	/* Ensure all programming commands are done before proceeding */
+	wmb();
+	set_l2_indirect_reg(L2PMCR, L2PMCR_GLOBAL_ENABLE);
+}
+
+static inline void hml2_pmu__disable(void)
+{
+	set_l2_indirect_reg(L2PMCR, L2PMCR_GLOBAL_DISABLE);
+	/* Ensure the basic counter unit is stopped before proceeding */
+	wmb();
+}
+
+static inline void hml2_pmu__counter_set_value(u32 idx, u64 value)
+{
+	u32 counter_reg;
+
+	if (idx == l2_cycle_ctr_idx) {
+		set_l2_indirect_reg(L2PMCCNTR, value);
+	} else {
+		counter_reg = (idx * IA_L2_REG_OFFSET) + IA_L2PMXEVCNTR_BASE;
+		set_l2_indirect_reg(counter_reg, (u32)(value & 0xFFFFFFFF));
+	}
+}
+
+static inline u64 hml2_pmu__counter_get_value(u32 idx)
+{
+	u64 value;
+	u32 counter_reg;
+
+	if (idx == l2_cycle_ctr_idx) {
+		value = get_l2_indirect_reg(L2PMCCNTR);
+	} else {
+		counter_reg = (idx * IA_L2_REG_OFFSET) + IA_L2PMXEVCNTR_BASE;
+		value = get_l2_indirect_reg(counter_reg);
+	}
+
+	return value;
+}
+
+static inline void hml2_pmu__counter_enable(u32 idx)
+{
+	u32 reg;
+
+	reg = get_l2_indirect_reg(L2PMCNTENSET);
+	reg |= idx_to_reg(idx);
+	set_l2_indirect_reg(L2PMCNTENSET, reg);
+}
+
+static inline void hml2_pmu__counter_disable(u32 idx)
+{
+	set_l2_indirect_reg(L2PMCNTENCLR, idx_to_reg(idx));
+}
+
+static inline void hml2_pmu__counter_enable_interrupt(u32 idx)
+{
+	u32 reg;
+
+	reg = get_l2_indirect_reg(L2PMINTENSET);
+	reg |= idx_to_reg(idx);
+	set_l2_indirect_reg(L2PMINTENSET, reg);
+}
+
+static inline void hml2_pmu__counter_disable_interrupt(u32 idx)
+{
+	set_l2_indirect_reg(L2PMINTENCLR, idx_to_reg(idx));
+}
+
+static inline void hml2_pmu__set_evcntcr(u32 ctr, u32 val)
+{
+	u32 evtcr_reg = (ctr * IA_L2_REG_OFFSET) + IA_L2PMXEVCNTCR_BASE;
+
+	set_l2_indirect_reg(evtcr_reg, val);
+}
+
+static inline void hml2_pmu__set_evtyper(u32 val, u32 ctr)
+{
+	u32 evtype_reg = (ctr * IA_L2_REG_OFFSET) + IA_L2PMXEVTYPER_BASE;
+
+	set_l2_indirect_reg(evtype_reg, val);
+}
+
+static void hml2_pmu__set_evres(struct hml2_pmu *slice,
+				u32 event_group, u32 event_reg, u32 event_cc)
+{
+	u64 group_val;
+	u64 group_mask;
+	u64 resr_val;
+	u32 shift;
+	unsigned long iflags;
+
+	shift = 8 * event_group;
+	group_val = ((u64)(event_cc & 0xff) << shift) | L2PMRESR_EN;
+	group_mask = ~(GENMASK_ULL(shift+7, shift));
+
+	spin_lock_irqsave(&slice->pmu_lock, iflags);
+
+	resr_val = get_l2_indirect_reg(L2PMRESR);
+	resr_val &= group_mask;
+	resr_val |= group_val;
+	set_l2_indirect_reg(L2PMRESR, resr_val);
+
+	spin_unlock_irqrestore(&slice->pmu_lock, iflags);
+}
+
+static void hml2_pmu__set_evfilter_task_mode(int ctr)
+{
+	u32 filter_reg = (ctr * IA_L2_REG_OFFSET) + IA_L2PMXEVFILTER_BASE;
+	u32 l2_orig_filter = L2PMXEVFILTER_SUFILTER_ALL |
+			     L2PMXEVFILTER_ORGFILTER_IDINDEP;
+	u32 filter_val = l2_orig_filter | 1 << (smp_processor_id() % 2);
+
+	set_l2_indirect_reg(filter_reg, filter_val);
+}
+
+static void hml2_pmu__set_evfilter_sys_mode(int ctr, int cpu, bool is_tracectr)
+{
+	u32 filter_reg = (ctr * IA_L2_REG_OFFSET) + IA_L2PMXEVFILTER_BASE;
+	u32 filter_val;
+	u32 l2_orig_filter = L2PMXEVFILTER_SUFILTER_ALL |
+			     L2PMXEVFILTER_ORGFILTER_IDINDEP;
+
+	if (is_tracectr)
+		filter_val = l2_orig_filter | 1 << (cpu % 2);
+	else
+		filter_val = l2_orig_filter | L2PMXEVFILTER_ORGFILTER_ALL;
+
+	set_l2_indirect_reg(filter_reg, filter_val);
+}
+
+static inline u32 hml2_pmu__getreset_ovsr(void)
+{
+	u32 result = get_l2_indirect_reg(L2PMOVSSET);
+
+	set_l2_indirect_reg(L2PMOVSCLR, result);
+	return result;
+}
+
+static inline bool hml2_pmu__has_overflowed(u32 ovsr)
+{
+	return !!(ovsr & l2_reset_mask);
+}
+
+static inline bool hml2_pmu__counter_has_overflowed(u32 ovsr, u32 idx)
+{
+	return !!(ovsr & idx_to_reg(idx));
+}
+
+static void l2_cache__event_update_from_slice(struct perf_event *event,
+					      struct hml2_pmu *slice)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 delta64, prev, now;
+	u32 delta;
+	u32 idx = hwc->idx;
+
+	do {
+		prev = atomic64_read(&slice->prev_count[idx]);
+		now = hml2_pmu__counter_get_value(idx);
+	} while (atomic64_cmpxchg(&slice->prev_count[idx], prev, now) != prev);
+
+	if (idx == l2_cycle_ctr_idx) {
+		/*
+		 * The cycle counter is 64-bit so needs separate handling
+		 * of 64-bit delta.
+		 */
+		delta64 = now - prev;
+		local64_add(delta64, &event->count);
+		local64_sub(delta64, &hwc->period_left);
+	} else {
+		/*
+		 * 32-bit counters need the unsigned 32-bit math to handle
+		 * overflow and now < prev
+		 */
+		delta = now - prev;
+		local64_add(delta, &event->count);
+		local64_sub(delta, &hwc->period_left);
+	}
+}
+
+static void l2_cache__slice_set_period(struct hml2_pmu *slice,
+				       struct hw_perf_event *hwc)
+{
+	u64 value = L2_MAX_PERIOD - (hwc->sample_period - 1);
+	u32 idx = hwc->idx;
+	u64 prev = atomic64_read(&slice->prev_count[idx]);
+
+	if (prev < value) {
+		value += prev;
+		atomic64_set(&slice->prev_count[idx], value);
+	} else {
+		value = prev;
+	}
+
+	hml2_pmu__counter_set_value(idx, value);
+}
+
+static int l2_cache__event_set_period(struct perf_event *event,
+				      struct hw_perf_event *hwc)
+{
+	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
+	struct hml2_pmu *slice = get_hml2_pmu(system, event->cpu);
+	s64 left = local64_read(&hwc->period_left);
+	s64 period = hwc->sample_period;
+	int ret = 0;
+	u32 idx;
+
+	if (unlikely(left <= -period)) {
+		left = period;
+		local64_set(&hwc->period_left, left);
+		hwc->last_period = period;
+		ret = 1;
+	}
+
+	if (unlikely(left <= 0)) {
+		left += period;
+		local64_set(&hwc->period_left, left);
+		hwc->last_period = period;
+		ret = 1;
+	}
+
+	if (left > (s64)L2_MAX_PERIOD)
+		left = L2_MAX_PERIOD;
+
+	idx = hwc->idx;
+
+	atomic64_set(&slice->prev_count[idx], (u64)-left);
+	hml2_pmu__counter_set_value(idx, (u64)-left);
+	perf_event_update_userpage(event);
+
+	return ret;
+}
+
+static inline int event_to_bit(unsigned int group)
+{
+	/* Lower bits are reserved for use by the counters */
+	return group + MAX_L2_CTRS + 1;
+}
+
+static int l2_cache__get_event_idx(struct hml2_pmu *slice,
+				   struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	int idx;
+	int bit;
+
+	if (hwc->config_base == L2CYCLE_CTR_RAW_CODE) {
+		if (test_and_set_bit(l2_cycle_ctr_idx, slice->used_mask))
+			return -EAGAIN;
+
+		return l2_cycle_ctr_idx;
+	}
+
+	if (L2_EVT_GROUP(hwc->config_base) > L2_EVT_GROUP_MAX)
+		return -EINVAL;
+
+	/* check for column exclusion */
+	bit = event_to_bit(L2_EVT_GROUP(hwc->config_base));
+	if (test_and_set_bit(bit, slice->used_mask)) {
+		pr_err("column exclusion for evt %lx\n", hwc->config_base);
+		event->state = PERF_EVENT_STATE_OFF;
+		return -EINVAL;
+	}
+
+	for (idx = 0; idx < l2cache_pmu.num_counters - 1; idx++) {
+		if (!test_and_set_bit(idx, slice->used_mask))
+			return idx;
+	}
+
+	/* The counters are all in use. */
+	clear_bit(bit, slice->used_mask);
+	return -EAGAIN;
+}
+
+static void l2_cache__clear_event_idx(struct hml2_pmu *slice,
+				      struct perf_event *event)
+{
+	int bit;
+	struct hw_perf_event *hwc = &event->hw;
+
+	bit = event_to_bit(L2_EVT_GROUP(hwc->config_base));
+	clear_bit(bit, slice->used_mask);
+}
+
+static void l2_cache__event_disable(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	if (!(hwc->state & PERF_HES_STOPPED)) {
+		hml2_pmu__counter_disable_interrupt(hwc->idx);
+		hml2_pmu__counter_disable(hwc->idx);
+	}
+}
+
+static irqreturn_t l2_cache__handle_irq(int irq_num, void *data)
+{
+	struct hml2_pmu *slice = data;
+	u32 ovsr;
+	int idx;
+	struct pt_regs *regs;
+
+	ovsr = hml2_pmu__getreset_ovsr();
+	if (!hml2_pmu__has_overflowed(ovsr))
+		return IRQ_NONE;
+
+	regs = get_irq_regs();
+
+	for (idx = 0; idx < l2cache_pmu.num_counters; idx++) {
+		struct perf_event *event = slice->events[idx];
+		struct hw_perf_event *hwc;
+		struct perf_sample_data data;
+
+		if (!event)
+			continue;
+
+		if (!hml2_pmu__counter_has_overflowed(ovsr, idx))
+			continue;
+
+		l2_cache__event_update_from_slice(event, slice);
+		hwc = &event->hw;
+
+		if (is_sampling_event(event)) {
+			perf_sample_data_init(&data, 0, hwc->last_period);
+			if (!l2_cache__event_set_period(event, hwc))
+				continue;
+			if (perf_event_overflow(event, &data, regs))
+				l2_cache__event_disable(event);
+		} else {
+			l2_cache__slice_set_period(slice, hwc);
+		}
+	}
+
+	/*
+	 * Handle the pending perf events.
+	 *
+	 * Note: this call *must* be run with interrupts disabled. For
+	 * platforms that can have the PMU interrupts raised as an NMI, this
+	 * will not work.
+	 */
+	irq_work_run();
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * Implementation of abstract pmu functionality required by
+ * the core perf events code.
+ */
+
+static void l2_cache__pmu_enable(struct pmu *pmu)
+{
+	hml2_pmu__enable();
+}
+
+static void l2_cache__pmu_disable(struct pmu *pmu)
+{
+	hml2_pmu__disable();
+}
+
+static int l2_cache__event_init(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	struct hml2_pmu *slice;
+
+	if (event->attr.type != l2cache_pmu.pmu.type)
+		return -ENOENT;
+
+	/* We cannot filter accurately so we just don't allow it. */
+	if (event->attr.exclude_user || event->attr.exclude_kernel ||
+			event->attr.exclude_hv || event->attr.exclude_idle)
+		return -EINVAL;
+
+	hwc->idx = -1;
+	hwc->config_base = event->attr.config;
+
+	/*
+	 * Ensure all events are on the same cpu so all events are in the
+	 * same cpu context, to avoid races on pmu_enable etc.
+	 * For the first event, record its CPU in slice->on_cpu.
+	 * For subsequent events on that slice, force the event to that cpu.
+	 */
+	slice = get_hml2_pmu(to_l2cache_pmu(event->pmu), event->cpu);
+	if (slice->on_cpu == -1)
+		slice->on_cpu = event->cpu;
+	else
+		event->cpu = slice->on_cpu;
+
+	/*
+	 * For counting events use L2_CNT_PERIOD which allows for simplified
+	 * math and proper handling of overflows.
+	 */
+	if (hwc->sample_period == 0) {
+		hwc->sample_period = L2_CNT_PERIOD;
+		hwc->last_period   = hwc->sample_period;
+		local64_set(&hwc->period_left, hwc->sample_period);
+	}
+
+	return 0;
+}
+
+static void l2_cache__event_update(struct perf_event *event)
+{
+	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
+	struct hml2_pmu *slice;
+	struct hw_perf_event *hwc = &event->hw;
+
+	if (hwc->idx == -1)
+		return;
+
+	slice = get_hml2_pmu(system, event->cpu);
+	if (likely(slice))
+		l2_cache__event_update_from_slice(event, slice);
+}
+
+static void l2_cache__event_start(struct perf_event *event, int flags)
+{
+	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
+	struct hml2_pmu *slice;
+	struct hw_perf_event *hwc = &event->hw;
+	int idx = hwc->idx;
+	u32 config;
+	u32 evt_prefix, event_reg, event_cc, event_group;
+	bool is_tracectr = false;
+
+	if (idx < 0)
+		return;
+
+	hwc->state = 0;
+
+	slice = get_hml2_pmu(system, event->cpu);
+	if (unlikely(!slice))
+		return;
+	if (is_sampling_event(event))
+		l2_cache__event_set_period(event, hwc);
+	else
+		l2_cache__slice_set_period(slice, hwc);
+
+	if (hwc->config_base == L2CYCLE_CTR_RAW_CODE)
+		goto out;
+
+	config = hwc->config_base;
+	evt_prefix  = L2_EVT_PREFIX(config);
+	event_reg   = L2_EVT_REG(config);
+	event_cc    = L2_EVT_CODE(config);
+	event_group = L2_EVT_GROUP(config);
+
+	/* Check if user requested any special origin filtering. */
+	if (evt_prefix == L2_TRACECTR_PREFIX)
+		is_tracectr = true;
+
+	hml2_pmu__set_evcntcr(idx, 0x0);
+	hml2_pmu__set_evtyper(event_group, idx);
+	hml2_pmu__set_evres(slice, event_group, event_reg, event_cc);
+	if (event->cpu < 0)
+		hml2_pmu__set_evfilter_task_mode(idx);
+	else
+		hml2_pmu__set_evfilter_sys_mode(idx, event->cpu, is_tracectr);
+out:
+	hml2_pmu__counter_enable_interrupt(idx);
+	hml2_pmu__counter_enable(idx);
+}
+
+static void l2_cache__event_stop(struct perf_event *event, int flags)
+{
+	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
+	struct hml2_pmu *slice;
+	struct hw_perf_event *hwc = &event->hw;
+	int idx = hwc->idx;
+
+	if (idx < 0)
+		return;
+
+	if (!(hwc->state & PERF_HES_STOPPED)) {
+		slice = get_hml2_pmu(system, event->cpu);
+		if (unlikely(!slice))
+			return;
+		hml2_pmu__counter_disable_interrupt(idx);
+		hml2_pmu__counter_disable(idx);
+
+		if (flags & PERF_EF_UPDATE)
+			l2_cache__event_update(event);
+		hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
+	}
+}
+
+/* Look for a duplicate event already configured on this cluster */
+static bool config_is_dup(struct hml2_pmu *slice, struct hw_perf_event *hwc)
+{
+	int i;
+	struct hw_perf_event *hwc_i;
+
+	for (i = 0; i < MAX_L2_CTRS; i++) {
+		if (slice->events[i] == NULL)
+			continue;
+		hwc_i = &slice->events[i]->hw;
+		if (hwc->config_base == hwc_i->config_base)
+			return true;
+	}
+	return false;
+}
+
+static int l2_cache__event_add(struct perf_event *event, int flags)
+{
+	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	int idx;
+	int err = 0;
+	struct hml2_pmu *slice;
+
+	slice = get_hml2_pmu(system, event->cpu);
+	if (!slice) {
+		event->state = PERF_EVENT_STATE_OFF;
+		hwc->idx = -1;
+		goto out;
+	}
+
+	/*
+	 * This checks for a duplicate event on the same cluster, which
+	 * typically occurs in non-sampling mode when using perf -a,
+	 * which generates events on each CPU. In this case, we don't
+	 * want to permanently disable the event by setting its state to
+	 * OFF, because if the other CPU is subsequently hotplugged, etc,
+	 * we want the opportunity to start collecting on this event.
+	 */
+	if (config_is_dup(slice, hwc)) {
+		hwc->idx = -1;
+		goto out;
+	}
+
+	idx = l2_cache__get_event_idx(slice, event);
+	if (idx < 0) {
+		err = idx;
+		goto out;
+	}
+
+	hwc->idx = idx;
+	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+	slice->events[idx] = event;
+	atomic64_set(&slice->prev_count[idx], 0ULL);
+
+	if (flags & PERF_EF_START)
+		l2_cache__event_start(event, flags);
+
+	/* Propagate changes to the userspace mapping. */
+	perf_event_update_userpage(event);
+
+out:
+	return err;
+}
+
+static void l2_cache__event_del(struct perf_event *event, int flags)
+{
+	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	struct hml2_pmu *slice;
+	int idx = hwc->idx;
+
+	if (idx < 0)
+		return;
+
+	slice = get_hml2_pmu(system, event->cpu);
+	if (unlikely(!slice))
+		return;
+	l2_cache__event_stop(event, flags | PERF_EF_UPDATE);
+	slice->events[idx] = NULL;
+	clear_bit(idx, slice->used_mask);
+	l2_cache__clear_event_idx(slice, event);
+
+	/* When last event is removed, clear slice cpu number */
+	if (find_first_bit(slice->used_mask, MAX_L2_CTRS) == MAX_L2_CTRS)
+		slice->on_cpu = -1;
+
+	perf_event_update_userpage(event);
+}
+
+static void l2_cache__event_read(struct perf_event *event)
+{
+	l2_cache__event_update(event);
+}
+
+/* NRCCG format for perf RAW codes. */
+PMU_FORMAT_ATTR(l2_prefix, "config:16-19");
+PMU_FORMAT_ATTR(l2_reg,    "config:12-15");
+PMU_FORMAT_ATTR(l2_code,   "config:4-11");
+PMU_FORMAT_ATTR(l2_grp,    "config:0-3");
+static struct attribute *l2_cache_pmu_formats[] = {
+	&format_attr_l2_prefix.attr,
+	&format_attr_l2_reg.attr,
+	&format_attr_l2_code.attr,
+	&format_attr_l2_grp.attr,
+	NULL,
+};
+
+static struct attribute_group l2_cache_pmu_format_group = {
+	.name = "format",
+	.attrs = l2_cache_pmu_formats,
+};
+
+static const struct attribute_group *l2_cache_pmu_attr_grps[] = {
+	&l2_cache_pmu_format_group,
+	NULL,
+};
+
+/*
+ * Generic device handlers
+ */
+
+static const struct acpi_device_id l2_cache_pmu_acpi_match[] = {
+	{ "QCOM8130", },
+	{ }
+};
+
+static int get_num_counters(void)
+{
+	int val;
+
+	val = get_l2_indirect_reg(L2PMCR);
+
+	/*
+	 * Read bits 15:11 of the L2PMCR and add 1
+	 * for the cycle counter.
+	 */
+	return ((val >> PMCR_NUM_EV_SHIFT) & PMCR_NUM_EV_MASK) + 1;
+}
+
+static inline int mpidr_to_cpu(u32 mpidr)
+{
+	return MPIDR_AFFINITY_LEVEL(mpidr, 0) |
+		(MPIDR_AFFINITY_LEVEL(mpidr, 1) << mpidr_affl1_shift);
+}
+
+static int get_logical_cpu_index(int phys_cpu)
+{
+	int cpu;
+
+	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
+		if (mpidr_to_cpu(cpu_logical_map(cpu)) == phys_cpu)
+			return cpu;
+	return -EINVAL;
+}
+
+static int l2_cache_pmu_probe_slice(struct device *dev, void *data)
+{
+	struct platform_device *pdev = to_platform_device(dev->parent);
+	struct platform_device *sdev = to_platform_device(dev);
+	struct l2cache_pmu *l2cache_pmu = data;
+	struct hml2_pmu *slice;
+	struct cpumask affinity_mask;
+	struct acpi_device *device;
+	int irq, err;
+	int phys_cpu, logical_cpu;
+	int i;
+	unsigned long instance;
+
+	if (acpi_bus_get_device(ACPI_HANDLE(dev), &device))
+		return -ENODEV;
+
+	if (kstrtoul(device->pnp.unique_id, 10, &instance) < 0) {
+		pr_err("unable to read ACPI uid\n");
+		return -ENODEV;
+	}
+
+	irq = platform_get_irq(sdev, 0);
+	if (irq < 0) {
+		dev_err(&pdev->dev,
+			"Failed to get valid irq for slice %ld\n", instance);
+		return irq;
+	}
+
+	slice = devm_kzalloc(&pdev->dev, sizeof(*slice), GFP_KERNEL);
+	if (!slice)
+		return -ENOMEM;
+
+	cpumask_clear(&affinity_mask);
+	slice->on_cpu = -1;
+
+	for (i = 0; i < num_cpus_in_cluster; i++) {
+		phys_cpu = instance * num_cpus_in_cluster + i;
+		logical_cpu = get_logical_cpu_index(phys_cpu);
+		if (logical_cpu >= 0) {
+			slice->cpu[slice->valid_cpus++] = logical_cpu;
+			per_cpu(cpu_to_pmu, logical_cpu) = slice;
+			cpumask_set_cpu(logical_cpu, &affinity_mask);
+		}
+	}
+
+	if (slice->valid_cpus == 0) {
+		dev_err(&pdev->dev, "No CPUs found for L2 cache instance %ld\n",
+			instance);
+		return -ENODEV;
+	}
+
+	if (irq_set_affinity(irq, &affinity_mask)) {
+		dev_err(&pdev->dev,
+			"Unable to set irq affinity (irq=%d, first cpu=%d)\n",
+			irq, slice->cpu[0]);
+		return -ENODEV;
+	}
+
+	err = devm_request_irq(
+		&pdev->dev, irq, l2_cache__handle_irq,
+		IRQF_NOBALANCING, "l2-cache-pmu", slice);
+	if (err) {
+		dev_err(&pdev->dev,
+			"Unable to request IRQ%d for L2 PMU counters\n",
+			irq);
+		return err;
+	}
+
+	dev_info(&pdev->dev, "Registered L2 cache PMU instance %ld with %d CPUs\n",
+		 instance, slice->valid_cpus);
+
+	spin_lock_init(&slice->pmu_lock);
+
+	hml2_pmu__init(slice);
+	list_add(&slice->entry, &l2cache_pmu->pmus);
+	l2cache_pmu->num_pmus++;
+
+	return 0;
+}
+
+static int l2_cache_pmu_probe(struct platform_device *pdev)
+{
+	int result;
+	int err;
+
+	INIT_LIST_HEAD(&l2cache_pmu.pmus);
+
+	l2cache_pmu.pmu = (struct pmu) {
+		.name		= "l2cache",
+		.pmu_enable	= l2_cache__pmu_enable,
+		.pmu_disable	= l2_cache__pmu_disable,
+		.event_init	= l2_cache__event_init,
+		.add		= l2_cache__event_add,
+		.del		= l2_cache__event_del,
+		.start		= l2_cache__event_start,
+		.stop		= l2_cache__event_stop,
+		.read		= l2_cache__event_read,
+		.attr_groups	= l2_cache_pmu_attr_grps,
+	};
+
+	l2cache_pmu.num_counters = get_num_counters();
+	l2_cycle_ctr_idx = l2cache_pmu.num_counters - 1;
+	l2_reset_mask = ((1 << (l2cache_pmu.num_counters - 1)) - 1) |
+		L2PM_CC_ENABLE;
+
+	err = device_property_read_u32(&pdev->dev, "qcom,num-cpus-in-cluster",
+				       &num_cpus_in_cluster);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to read number of cpus in cluster\n");
+		return err;
+	}
+	mpidr_affl1_shift = hweight8(num_cpus_in_cluster - 1);
+
+	/* Read slice info and initialize each slice */
+	result = device_for_each_child(&pdev->dev, &l2cache_pmu,
+				       l2_cache_pmu_probe_slice);
+
+	if (result < 0)
+		return result;
+
+	if (l2cache_pmu.num_pmus == 0) {
+		dev_err(&pdev->dev, "No hardware L2 PMUs found\n");
+		return -ENODEV;
+	}
+
+	result = perf_pmu_register(&l2cache_pmu.pmu,
+				   l2cache_pmu.pmu.name, -1);
+
+	if (result < 0)
+		dev_err(&pdev->dev,
+			"Failed to register L2 cache PMU (%d)\n",
+			result);
+	else
+		dev_info(&pdev->dev,
+			 "Registered L2 cache PMU using %d HW PMUs\n",
+			 l2cache_pmu.num_pmus);
+
+	return result;
+}
+
+static int l2_cache_pmu_remove(struct platform_device *pdev)
+{
+	perf_pmu_unregister(&l2cache_pmu.pmu);
+	return 0;
+}
+
+static struct platform_driver l2_cache_pmu_driver = {
+	.driver = {
+		.name = "qcom-l2cache-pmu",
+		.owner = THIS_MODULE,
+		.acpi_match_table = ACPI_PTR(l2_cache_pmu_acpi_match),
+	},
+	.probe = l2_cache_pmu_probe,
+	.remove = l2_cache_pmu_remove,
+};
+
+static int __init register_l2_cache_pmu_driver(void)
+{
+	return platform_driver_register(&l2_cache_pmu_driver);
+}
+device_initcall(register_l2_cache_pmu_driver);
diff --git a/include/linux/soc/qcom/perf_event_l2.h b/include/linux/soc/qcom/perf_event_l2.h
new file mode 100644
index 0000000..2a35c9d
--- /dev/null
+++ b/include/linux/soc/qcom/perf_event_l2.h
@@ -0,0 +1,82 @@
+/*
+ * Copyright (c) 2015, 2016 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __QCOM_PERF_EVENT_L2_H
+#define __QCOM_PERF_EVENT_L2_H
+
+#define MAX_L2_CTRS             17
+#define MAX_L2_EVENTS           32
+
+#define PMCR_NUM_EV_SHIFT       11
+#define PMCR_NUM_EV_MASK        0x1f
+
+#define L2PMCR                  0x400
+#define L2PMCNTENCLR            0x403
+#define L2PMCNTENSET            0x404
+#define L2PMINTENCLR            0x405
+#define L2PMINTENSET            0x406
+#define L2PMOVSCLR              0x407
+#define L2PMOVSSET              0x408
+#define L2PMCCNTCR              0x409
+#define L2PMCCNTR               0x40A
+#define L2PMCCNTSR              0x40C
+#define L2PMRESR                0x410
+#define IA_L2PMXEVCNTCR_BASE    0x420
+#define IA_L2PMXEVCNTR_BASE     0x421
+#define IA_L2PMXEVFILTER_BASE   0x423
+#define IA_L2PMXEVTYPER_BASE    0x424
+
+#define IA_L2_REG_OFFSET        0x10
+
+#define L2PMXEVFILTER_SUFILTER_ALL      0x000E0000
+#define L2PMXEVFILTER_ORGFILTER_IDINDEP 0x00000004
+#define L2PMXEVFILTER_ORGFILTER_ALL     0x00000003
+
+#define L2PM_CC_ENABLE          0x80000000
+
+#define L2EVTYPER_REG_SHIFT     3
+
+#define L2CYCLE_CTR_BIT         31
+#define L2CYCLE_CTR_RAW_CODE    0xFE
+
+#define L2PMCR_RESET_ALL        0x6
+#define L2PMCR_GLOBAL_ENABLE    0x1
+#define L2PMCR_GLOBAL_DISABLE   0x0
+
+#define L2PMRESR_EN             ((u64)1 << 63)
+
+#define L2_EVT_MASK             0xFFFFF
+#define L2_EVT_PFX_MASK         0xF0000
+#define L2_EVT_REG_MASK         0x0F000
+#define L2_EVT_CODE_MASK        0x00FF0
+#define L2_EVT_GROUP_MASK       0x0000F
+#define L2_EVT_REG_GROUP_MASK   (L2_EVT_REG_MASK | L2_EVT_GROUP_MASK)
+#define L2_EVT_PFX_SHIFT        16
+#define L2_EVT_REG_SHIFT        12
+#define L2_EVT_CODE_SHIFT        4
+#define L2_EVT_GROUP_SHIFT       0
+#define L2_EVT_PREFIX(event) ((event & L2_EVT_PFX_MASK) >> L2_EVT_PFX_SHIFT)
+#define L2_EVT_REG(event)    ((event & L2_EVT_REG_MASK) >> L2_EVT_REG_SHIFT)
+#define L2_EVT_CODE(event)   ((event & L2_EVT_CODE_MASK) >> L2_EVT_CODE_SHIFT)
+#define L2_EVT_GROUP(event)  ((event & L2_EVT_GROUP_MASK) >> L2_EVT_GROUP_SHIFT)
+
+#define L2_EVT_GROUP_MAX         7
+
+#define L2_TRACECTR_PREFIX       5
+
+#define L2_MAX_PERIOD           U32_MAX
+#define L2_CNT_PERIOD           (U32_MAX - 0x07FFFFFF)
+
+#define MAX_CPUS_IN_CLUSTER     16
+
+#endif
-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/2] perf: allow add to change event state
  2016-06-03 21:03 ` [PATCH 1/2] perf: allow add to change event state Neil Leeder
@ 2016-06-03 21:38   ` Peter Zijlstra
  0 siblings, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2016-06-03 21:38 UTC (permalink / raw)
  To: Neil Leeder
  Cc: Catalin Marinas, Will Deacon, Mark Rutland, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov

On Fri, Jun 03, 2016 at 05:03:31PM -0400, Neil Leeder wrote:
> When the platform-specific pmu->add function returns
> an error, it may have also changed the event's state.
> If so, do not override that new state.

This is inadequate; it fails to explain what the problem is and why this
is a good solution.


* Re: [PATCH 0/2] qcom: add l2 cache perf events driver
  2016-06-03 21:03 [PATCH 0/2] qcom: add l2 cache perf events driver Neil Leeder
  2016-06-03 21:03 ` [PATCH 1/2] perf: allow add to change event state Neil Leeder
  2016-06-03 21:03 ` [PATCH 2/2] soc: qcom: add l2 cache perf events driver Neil Leeder
@ 2016-06-06  9:04 ` Mark Rutland
  2016-06-08 15:21   ` Neil Leeder
  2 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2016-06-06  9:04 UTC (permalink / raw)
  To: Neil Leeder
  Cc: Catalin Marinas, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov

On Fri, Jun 03, 2016 at 05:03:30PM -0400, Neil Leeder wrote:
> This adds a new dynamic PMU to the Perf Events framework to program
> and control the L2 cache PMUs in some Qualcomm Technologies SOCs.
> 
> The driver exports formatting and event information to sysfs so it can
> be used by the perf user space tools with the syntax:
> perf stat -e l2cache/event=0x42/
> 
> One point to note is that there are certain combinations of events
> which are invalid, and which are detected in event_add().

Which combinations of events are invalid?

Please elaborate.

> Simply having event_add() fail would result in event_sched_in() making
> it Inactive, treating it as over-allocation of counters, leading to
> repeated attempts to allocate the events and ending up with a
> statistical count.  A solution for this situation is to turn the
> conflicting event off in event_add(). This allows a single error
> message to be generated, and no recurring attempts to re-add the
> invalid event. In order for this to work, event_sched_in()
> needs to detect that event_add() changed the state, and not override it
> and force it to Inactive.

For heterogeneous PMUs, we added the pmu::filter_match(event) callback
for a similar purpose: preventing an event from being scheduled on a
core which does not support that event, while allowing other events to
be scheduled.

So if you truly need to filter events, the infrastructure for doing so
already exists.

However, you will need to elaborate on "there are certain combinations
of events which are invalid".

> This patchset requires:
> [PATCH] soc: qcom: provide mechanism for drivers to access L2 registers

A link would be remarkably helpful.

Better still would be to fold that patch into this series, since this is
its only user, and each patch provides helpful review context for the
other.

Thanks,
Mark.


* Re: [PATCH 2/2] soc: qcom: add l2 cache perf events driver
  2016-06-03 21:03 ` [PATCH 2/2] soc: qcom: add l2 cache perf events driver Neil Leeder
@ 2016-06-06  9:51   ` Mark Rutland
  2016-06-08 15:16     ` Neil Leeder
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2016-06-06  9:51 UTC (permalink / raw)
  To: Neil Leeder
  Cc: Catalin Marinas, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov

On Fri, Jun 03, 2016 at 05:03:32PM -0400, Neil Leeder wrote:
> Adds perf events support for L2 cache PMU.
> 
> The L2 cache PMU driver is named 'l2cache' and can be used
> with perf events to profile L2 events such as cache hits
> and misses.
> 
> Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
> ---
>  drivers/soc/qcom/Kconfig               |  10 +
>  drivers/soc/qcom/Makefile              |   1 +
>  drivers/soc/qcom/perf_event_l2.c       | 917 +++++++++++++++++++++++++++++++++
>  include/linux/soc/qcom/perf_event_l2.h |  82 +++
>  4 files changed, 1010 insertions(+)
>  create mode 100644 drivers/soc/qcom/perf_event_l2.c
>  create mode 100644 include/linux/soc/qcom/perf_event_l2.h
> 
> diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
> index 21ec616..0b5ddb9 100644
> --- a/drivers/soc/qcom/Kconfig
> +++ b/drivers/soc/qcom/Kconfig
> @@ -19,6 +19,16 @@ config QCOM_L2_ACCESSORS
>  	  Provides support for accessing registers in the L2 cache
>  	  for Qualcomm Technologies chips.
>  
> +config QCOM_PERF_EVENTS_L2
> +	bool "Qualcomm Technologies L2-cache perf events"
> +	depends on ARCH_QCOM && HW_PERF_EVENTS && ACPI
> +	select QCOM_L2_ACCESSORS
> +	  help
> +	  Provides support for the L2 cache performance monitor unit (PMU)
> +	  in Qualcomm Technologies processors.
> +	  Adds the L2 cache PMU into the perf events subsystem for
> +	  monitoring L2 cache events.
> +
>  config QCOM_PM
>  	bool "Qualcomm Power Management"
>  	depends on ARCH_QCOM && !ARM64
> diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
> index 6ef29b9..c8e89ca9 100644
> --- a/drivers/soc/qcom/Makefile
> +++ b/drivers/soc/qcom/Makefile
> @@ -1,5 +1,6 @@
>  obj-$(CONFIG_QCOM_GSBI)	+=	qcom_gsbi.o
>  obj-$(CONFIG_QCOM_L2_ACCESSORS) += l2-accessors.o
> +obj-$(CONFIG_QCOM_PERF_EVENTS_L2)	+= perf_event_l2.o
>  obj-$(CONFIG_QCOM_PM)	+=	spm.o
>  obj-$(CONFIG_QCOM_SMD) +=	smd.o
>  obj-$(CONFIG_QCOM_SMD_RPM)	+= smd-rpm.o
> diff --git a/drivers/soc/qcom/perf_event_l2.c b/drivers/soc/qcom/perf_event_l2.c
> new file mode 100644
> index 0000000..2485b9e
> --- /dev/null
> +++ b/drivers/soc/qcom/perf_event_l2.c
> @@ -0,0 +1,917 @@
> +/* Copyright (c) 2015,2016 The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +#define pr_fmt(fmt) "l2 perfevents: " fmt
> +
> +#include <linux/module.h>
> +#include <linux/bitops.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/irq.h>
> +#include <linux/list.h>
> +#include <linux/acpi.h>
> +#include <linux/perf_event.h>
> +#include <linux/platform_device.h>
> +#include <linux/soc/qcom/perf_event_l2.h>
> +#include <linux/soc/qcom/l2-accessors.h>
> +
> +/*
> + * The cache is made-up of one or more slices, each slice has its own PMU.
> + * This structure represents one of the hardware PMUs.
> + */

I take it each slice PMU is shared by several CPUs? i.e. there aren't
per-cpu slice PMU counters.

> +struct hml2_pmu {
> +	struct list_head entry;
> +	struct perf_event *events[MAX_L2_CTRS];
> +	unsigned long used_mask[BITS_TO_LONGS(MAX_L2_EVENTS)];

What's the difference between MAX_L2_CTRS and MAX_L2_EVENTS?

I'm surprised that they are different. What precisely do either
represent?

Surely you don't have different events per-slice? Why do you need the
PMU pointers at the slice level?

> +	unsigned int valid_cpus;
> +	int on_cpu;
> +	u8 cpu[MAX_CPUS_IN_CLUSTER];

These all look suspicious to me (potentially barring on_cpu)

Surely this is an uncore PMU? It represents a shared resource, with
shared counters, so it should be.

If you need to encode a set of CPUs, use a cpumask.

> +	atomic64_t prev_count[MAX_L2_CTRS];
> +	spinlock_t pmu_lock;
> +};
> +
> +/*
> + * Aggregate PMU. Implements the core pmu functions and manages
> + * the hardware PMUs.
> + */
> +struct l2cache_pmu {
> +	u32 num_pmus;
> +	struct list_head pmus;
> +	struct pmu pmu;
> +	int num_counters;
> +};
> +
> +#define to_l2cache_pmu(p) (container_of(p, struct l2cache_pmu, pmu))
> +
> +static DEFINE_PER_CPU(struct hml2_pmu *, cpu_to_pmu);
> +static struct l2cache_pmu l2cache_pmu = { 0 };
> +static int num_cpus_in_cluster;
> +
> +static u32 l2_cycle_ctr_idx;
> +static u32 l2_reset_mask;
> +static u32 mpidr_affl1_shift;

Eww. Drivers really shouldn't be messing with the MPIDR. The precise
values are bound to change between generations of SoCs leaving us with a
mess.

The FW should tell us precisely which CPUs the device is affine to.

> +static inline u32 idx_to_reg(u32 idx)
> +{
> +	u32 bit;
> +
> +	if (idx == l2_cycle_ctr_idx)
> +		bit = BIT(L2CYCLE_CTR_BIT);
> +	else
> +		bit = BIT(idx);
> +	return bit;
> +}

Probably worth giving this function a _bit suffix. That makes it less
confusing when it's used later.

[...]

> +static inline void hml2_pmu__enable(void)
> +{
> +	/* Ensure all programming commands are done before proceeding */
> +	wmb();
> +	set_l2_indirect_reg(L2PMCR, L2PMCR_GLOBAL_ENABLE);
> +}
> +
> +static inline void hml2_pmu__disable(void)
> +{
> +	set_l2_indirect_reg(L2PMCR, L2PMCR_GLOBAL_DISABLE);
> +	/* Ensure the basic counter unit is stopped before proceeding */
> +	wmb();
> +}

What does set_l2_indirect_reg do? IIRC it does system register accesses,
so I don't see why wmb() (A.K.A dsb(st)) would ensure completion of
those. So what's going on in hml2_pmu__disable()?

What exactly are you trying to order here?

> +static inline void hml2_pmu__counter_set_value(u32 idx, u64 value)
> +{
> +	u32 counter_reg;
> +
> +	if (idx == l2_cycle_ctr_idx) {
> +		set_l2_indirect_reg(L2PMCCNTR, value);
> +	} else {
> +		counter_reg = (idx * 16) + IA_L2PMXEVCNTR_BASE;
> +		set_l2_indirect_reg(counter_reg, (u32)(value & 0xFFFFFFFF));
> +	}
> +}
> +
> +static inline u64 hml2_pmu__counter_get_value(u32 idx)
> +{
> +	u64 value;
> +	u32 counter_reg;
> +
> +	if (idx == l2_cycle_ctr_idx) {
> +		value = get_l2_indirect_reg(L2PMCCNTR);
> +	} else {
> +		counter_reg = (idx * 16) + IA_L2PMXEVCNTR_BASE;
> +		value = get_l2_indirect_reg(counter_reg);
> +	}
> +
> +	return value;
> +}

It would be good to explain the magic 16 for these two (ideally using
some well-named mnemonic).

[...]

> +static inline int event_to_bit(unsigned int group)
> +{
> +	/* Lower bits are reserved for use by the counters */
> +	return group + MAX_L2_CTRS + 1;
> +}

What exactly is a group in this context? Why is this not group_to_bit?

> +static int l2_cache__get_event_idx(struct hml2_pmu *slice,
> +				   struct perf_event *event)
> +{
> +	struct hw_perf_event *hwc = &event->hw;
> +	int idx;
> +	int bit;
> +
> +	if (hwc->config_base == L2CYCLE_CTR_RAW_CODE) {
> +		if (test_and_set_bit(l2_cycle_ctr_idx, slice->used_mask))
> +			return -EAGAIN;
> +
> +		return l2_cycle_ctr_idx;
> +	}
> +
> +	if (L2_EVT_GROUP(hwc->config_base) > L2_EVT_GROUP_MAX)
> +		return -EINVAL;

This doesn't look right. L2_EVT_GROUP() masks out and shifts out bits,
so I can pass an invalid value and it will be silently coerced to some
valid value.

> +
> +	/* check for column exclusion */
> +	bit = event_to_bit(L2_EVT_GROUP(hwc->config_base));
> +	if (test_and_set_bit(bit, slice->used_mask)) {
> +		pr_err("column exclusion for evt %lx\n", hwc->config_base);
> +		event->state = PERF_EVENT_STATE_OFF;
> +		return -EINVAL;
> +	}
> +
> +	for (idx = 0; idx < l2cache_pmu.num_counters - 1; idx++) {
> +		if (!test_and_set_bit(idx, slice->used_mask))
> +			return idx;
> +	}
> +
> +	/* The counters are all in use. */
> +	clear_bit(bit, slice->used_mask);
> +	return -EAGAIN;
> +}

[...]

> +static irqreturn_t l2_cache__handle_irq(int irq_num, void *data)
> +{
> +	struct hml2_pmu *slice = data;
> +	u32 ovsr;
> +	int idx;
> +	struct pt_regs *regs;
> +
> +	ovsr = hml2_pmu__getreset_ovsr();
> +	if (!hml2_pmu__has_overflowed(ovsr))
> +		return IRQ_NONE;
> +
> +	regs = get_irq_regs();
> +
> +	for (idx = 0; idx < l2cache_pmu.num_counters; idx++) {
> +		struct perf_event *event = slice->events[idx];
> +		struct hw_perf_event *hwc;
> +		struct perf_sample_data data;
> +
> +		if (!event)
> +			continue;
> +
> +		if (!hml2_pmu__counter_has_overflowed(ovsr, idx))
> +			continue;
> +
> +		l2_cache__event_update_from_slice(event, slice);
> +		hwc = &event->hw;
> +
> +		if (is_sampling_event(event)) {
> +			perf_sample_data_init(&data, 0, hwc->last_period);

I don't think sampling makes sense, given this is an uncore PMU and the
events are triggered by other CPUs.

> +			if (!l2_cache__event_set_period(event, hwc))
> +				continue;
> +			if (perf_event_overflow(event, &data, regs))
> +				l2_cache__event_disable(event);
> +		} else {
> +			l2_cache__slice_set_period(slice, hwc);
> +		}
> +	}
> +
> +	/*
> +	 * Handle the pending perf events.
> +	 *
> +	 * Note: this call *must* be run with interrupts disabled. For
> +	 * platforms that can have the PMU interrupts raised as an NMI, this
> +	 * will not work.
> +	 */
> +	irq_work_run();
> +
> +	return IRQ_HANDLED;
> +}

[...]

> +static int l2_cache__event_init(struct perf_event *event)
> +{
> +	struct hw_perf_event *hwc = &event->hw;
> +	struct hml2_pmu *slice;
> +
> +	if (event->attr.type != l2cache_pmu.pmu.type)
> +		return -ENOENT;
> +
> +	/* We cannot filter accurately so we just don't allow it. */
> +	if (event->attr.exclude_user || event->attr.exclude_kernel ||
> +			event->attr.exclude_hv || event->attr.exclude_idle)
> +		return -EINVAL;
> +

Please also check grouping. For instance, you definitely don't support
event->pmu != event->group_leader->pmu, modulo some weirdness with
software events. See drivers/bus/arm-ccn.c.

Do you support groups of events at all?

If so, you must validate that the groups are also schedulable.

> +	hwc->idx = -1;
> +	hwc->config_base = event->attr.config;
> +
> +	/*
> +	 * Ensure all events are on the same cpu so all events are in the
> +	 * same cpu context, to avoid races on pmu_enable etc.
> +	 * For the first event, record its CPU in slice->on_cpu.
> +	 * For subsequent events on that slice, force the event to that cpu.
> +	 */
> +	slice = get_hml2_pmu(to_l2cache_pmu(event->pmu), event->cpu);
> +	if (slice->on_cpu == -1)
> +		slice->on_cpu = event->cpu;
> +	else
> +		event->cpu = slice->on_cpu;

Please just do the usual thing for uncore PMU CPU handling. Take a look
at the arm-ccn driver.

> +	/*
> +	 * For counting events use L2_CNT_PERIOD which allows for simplified
> +	 * math and proper handling of overflows.
> +	 */
> +	if (hwc->sample_period == 0) {
> +		hwc->sample_period = L2_CNT_PERIOD;
> +		hwc->last_period   = hwc->sample_period;
> +		local64_set(&hwc->period_left, hwc->sample_period);
> +	}
> +
> +	return 0;
> +}

Huh? You haven't validated the event. Please validate the config is a
real, countable event that you support before assuming success.

> +static void l2_cache__event_update(struct perf_event *event)
> +{
> +	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
> +	struct hml2_pmu *slice;
> +	struct hw_perf_event *hwc = &event->hw;
> +
> +	if (hwc->idx == -1)
> +		return;
> +
> +	slice = get_hml2_pmu(system, event->cpu);
> +	if (likely(slice))
> +		l2_cache__event_update_from_slice(event, slice);
> +}

When is slice NULL?

[...]

> +static int l2_cache__event_add(struct perf_event *event, int flags)
> +{
> +	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
> +	struct hw_perf_event *hwc = &event->hw;
> +	int idx;
> +	int err = 0;
> +	struct hml2_pmu *slice;
> +
> +	slice = get_hml2_pmu(system, event->cpu);
> +	if (!slice) {
> +		event->state = PERF_EVENT_STATE_OFF;
> +		hwc->idx = -1;
> +		goto out;
> +	}
> +
> +	/*
> +	 * This checks for a duplicate event on the same cluster, which
> +	 * typically occurs in non-sampling mode when using perf -a,
> +	 * which generates events on each CPU. In this case, we don't
> +	 * want to permanently disable the event by setting its state to
> +	 * OFF, because if the other CPU is subsequently hotplugged, etc,
> +	 * we want the opportunity to start collecting on this event.
> +	 */
> +	if (config_is_dup(slice, hwc)) {
> +		hwc->idx = -1;
> +		goto out;
> +	}

Eww. This indicates you're just hacking around userspace.

Make this an uncore PMU, and expose a cpumask. See the arm-ccn driver.

> +
> +	idx = l2_cache__get_event_idx(slice, event);
> +	if (idx < 0) {
> +		err = idx;
> +		goto out;
> +	}
> +
> +	hwc->idx = idx;
> +	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
> +	slice->events[idx] = event;
> +	atomic64_set(&slice->prev_count[idx], 0ULL);
> +
> +	if (flags & PERF_EF_START)
> +		l2_cache__event_start(event, flags);
> +
> +	/* Propagate changes to the userspace mapping. */
> +	perf_event_update_userpage(event);
> +
> +out:
> +	return err;
> +}

[...]

> +static inline int mpidr_to_cpu(u32 mpidr)
> +{
> +	return MPIDR_AFFINITY_LEVEL(mpidr, 0) |
> +		(MPIDR_AFFINITY_LEVEL(mpidr, 1) << mpidr_affl1_shift);
> +}

I don't follow the logic here.

What exactly are you trying to get? What space does the result exist in?
It's neither the Linux logical ID nor the physical ID.

> +
> +static int get_logical_cpu_index(int phys_cpu)
> +{
> +	int cpu;
> +
> +	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
> +		if (mpidr_to_cpu(cpu_logical_map(cpu)) == phys_cpu)
> +			return cpu;
> +	return -EINVAL;
> +}

As above, I really don't follow this.

What exactly is phys_cpu?

> +static int l2_cache_pmu_probe_slice(struct device *dev, void *data)
> +{
> +	struct platform_device *pdev = to_platform_device(dev->parent);
> +	struct platform_device *sdev = to_platform_device(dev);
> +	struct l2cache_pmu *l2cache_pmu = data;
> +	struct hml2_pmu *slice;
> +	struct cpumask affinity_mask;
> +	struct acpi_device *device;
> +	int irq, err;
> +	int phys_cpu, logical_cpu;
> +	int i;
> +	unsigned long instance;
> +
> +	if (acpi_bus_get_device(ACPI_HANDLE(dev), &device))
> +		return -ENODEV;
> +
> +	if (kstrtol(device->pnp.unique_id, 10, &instance) < 0) {
> +		pr_err("unable to read ACPI uid\n");
> +		return -ENODEV;
> +	}
> +
> +	irq = platform_get_irq(sdev, 0);
> +	if (irq < 0) {
> +		dev_err(&pdev->dev,
> +			"Failed to get valid irq for slice %ld\n", instance);
> +		return irq;
> +	}
> +
> +	slice = devm_kzalloc(&pdev->dev, sizeof(*slice), GFP_KERNEL);
> +	if (!slice)
> +		return -ENOMEM;
> +
> +	cpumask_clear(&affinity_mask);
> +	slice->on_cpu = -1;
> +
> +	for (i = 0; i < num_cpus_in_cluster; i++) {
> +		phys_cpu = instance * num_cpus_in_cluster + i;
> +		logical_cpu = get_logical_cpu_index(phys_cpu);
> +		if (logical_cpu >= 0) {
> +			slice->cpu[slice->valid_cpus++] = logical_cpu;
> +			per_cpu(cpu_to_pmu, logical_cpu) = slice;
> +			cpumask_set_cpu(logical_cpu, &affinity_mask);
> +		}
> +	}

Please, get a better, explicit, description of CPU affinity, rather than
depending on a fragile unique ID and an arbitrary MPIDR decomposition
scheme.

> +
> +	if (slice->valid_cpus == 0) {
> +		dev_err(&pdev->dev, "No CPUs found for L2 cache instance %ld\n",
> +			instance);
> +		return -ENODEV;
> +	}
> +
> +	if (irq_set_affinity(irq, &affinity_mask)) {
> +		dev_err(&pdev->dev,
> +			"Unable to set irq affinity (irq=%d, first cpu=%d)\n",
> +			irq, slice->cpu[0]);
> +		return -ENODEV;
> +	}

I didn't spot any CPU notifier. Consider hotplug and the automigration
of IRQs.

> +
> +	err = devm_request_irq(
> +		&pdev->dev, irq, l2_cache__handle_irq,
> +		IRQF_NOBALANCING, "l2-cache-pmu", slice);
> +	if (err) {
> +		dev_err(&pdev->dev,
> +			"Unable to request IRQ%d for L2 PMU counters\n",
> +			irq);
> +		return err;
> +	}
> +
> +	dev_info(&pdev->dev, "Registered L2 cache PMU instance %ld with %d CPUs\n",
> +		 instance, slice->valid_cpus);
> +
> +	slice->pmu_lock = __SPIN_LOCK_UNLOCKED(slice->pmu_lock);
> +
> +	hml2_pmu__init(slice);
> +	list_add(&slice->entry, &l2cache_pmu->pmus);
> +	l2cache_pmu->num_pmus++;
> +
> +	return 0;
> +}
> +
> +static int l2_cache_pmu_probe(struct platform_device *pdev)
> +{
> +	int result;
> +	int err;
> +
> +	INIT_LIST_HEAD(&l2cache_pmu.pmus);
> +
> +	l2cache_pmu.pmu = (struct pmu) {
> +		.name		= "l2cache",
> +		.pmu_enable	= l2_cache__pmu_enable,
> +		.pmu_disable	= l2_cache__pmu_disable,
> +		.event_init	= l2_cache__event_init,
> +		.add		= l2_cache__event_add,
> +		.del		= l2_cache__event_del,
> +		.start		= l2_cache__event_start,
> +		.stop		= l2_cache__event_stop,
> +		.read		= l2_cache__event_read,
> +		.attr_groups	= l2_cache_pmu_attr_grps,
> +	};
> +
> +	l2cache_pmu.num_counters = get_num_counters();
> +	l2_cycle_ctr_idx = l2cache_pmu.num_counters - 1;
> +	l2_reset_mask = ((1 << (l2cache_pmu.num_counters - 1)) - 1) |
> +		L2PM_CC_ENABLE;
> +
> +	err = device_property_read_u32(&pdev->dev, "qcom,num-cpus-in-cluster",
> +				       &num_cpus_in_cluster);

This really is not a property of the L2. It's a larger topological
detail.

Describe the CPU affinity explicitly rather than trying to hackishly
build it up from myriad sources.

> +	if (err) {
> +		dev_err(&pdev->dev, "Failed to read number of cpus in cluster\n");
> +		return err;
> +	}
> +	mpidr_affl1_shift = hweight8(num_cpus_in_cluster - 1);
> +
> +	/* Read slice info and initialize each slice */
> +	result = device_for_each_child(&pdev->dev, &l2cache_pmu,
> +				       l2_cache_pmu_probe_slice);
> +
> +	if (result < 0)
> +		return -ENODEV;
> +
> +	if (l2cache_pmu.num_pmus == 0) {
> +		dev_err(&pdev->dev, "No hardware L2 PMUs found\n");
> +		return -ENODEV;
> +	}
> +
> +	result = perf_pmu_register(&l2cache_pmu.pmu,
> +				   l2cache_pmu.pmu.name, -1);
> +

Might you have multiple sockets? You probably want an instance ID on
the PMU name.

Thanks,
Mark.


* Re: [PATCH 2/2] soc: qcom: add l2 cache perf events driver
  2016-06-06  9:51   ` Mark Rutland
@ 2016-06-08 15:16     ` Neil Leeder
  2016-06-09 15:56       ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Neil Leeder @ 2016-06-08 15:16 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov, nleeder

Mark,
Thank you for the detailed review.

On 6/6/2016 05:51 AM, Mark Rutland wrote:
> On Fri, Jun 03, 2016 at 05:03:32PM -0400, Neil Leeder wrote:
>> Adds perf events support for L2 cache PMU.
>>
>> The L2 cache PMU driver is named 'l2cache' and can be used
>> with perf events to profile L2 events such as cache hits
>> and misses.
>>
>> Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
>> ---
>>  drivers/soc/qcom/Kconfig               |  10 +
>>  drivers/soc/qcom/Makefile              |   1 +
>>  drivers/soc/qcom/perf_event_l2.c       | 917 +++++++++++++++++++++++++++++++++
>>  include/linux/soc/qcom/perf_event_l2.h |  82 +++
>>  4 files changed, 1010 insertions(+)
>>  create mode 100644 drivers/soc/qcom/perf_event_l2.c
>>  create mode 100644 include/linux/soc/qcom/perf_event_l2.h
>>

[...]

>> +++ b/drivers/soc/qcom/perf_event_l2.c
>> @@ -0,0 +1,917 @@
>> +/* Copyright (c) 2015,2016 The Linux Foundation. All rights reserved.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 and
>> + * only version 2 as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + */
>> +#define pr_fmt(fmt) "l2 perfevents: " fmt
>> +
>> +#include <linux/module.h>
>> +#include <linux/bitops.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/io.h>
>> +#include <linux/irq.h>
>> +#include <linux/list.h>
>> +#include <linux/acpi.h>
>> +#include <linux/perf_event.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/soc/qcom/perf_event_l2.h>
>> +#include <linux/soc/qcom/l2-accessors.h>
>> +
>> +/*
>> + * The cache is made-up of one or more slices, each slice has its own PMU.
>> + * This structure represents one of the hardware PMUs.
>> + */
> 
> I take it each slice PMU is shared by several CPUs? i.e. there aren't
> per-cpu slice PMU counters.
> 

That is correct.

>> +struct hml2_pmu {
>> +	struct list_head entry;
>> +	struct perf_event *events[MAX_L2_CTRS];
>> +	unsigned long used_mask[BITS_TO_LONGS(MAX_L2_EVENTS)];
> 
> What's the difference between MAX_L2_CTRS and MAX_L2_EVENTS?
> 
> I'm surprised that they are different. What precisely do either
> represent?
> 
> Surely you don't have different events per-slice? Why do you need the
> PMU pointers at the slice level?
> 

Qualcomm PMUs have events arranged in a matrix of rows (codes) and columns (groups).
Only one event can be enabled from each group at once.
The upper part of used_mask keeps a record of which groups have been
used. This is the same mechanism used in armv7
(arch/arm/perf_event_v7.c:krait_event_to_bit()).
So used_mask tracks both which physical counters are in use and which
groups are in use, which is why its size differs from MAX_L2_CTRS.

I kept this because it's what's done in armv7. If there's an objection, I can
move the group used_mask to its own bitmap.

>> +	unsigned int valid_cpus;
>> +	int on_cpu;
>> +	u8 cpu[MAX_CPUS_IN_CLUSTER];
> 
> These all look suspicious to me (potentially barring on_cpu)
> 
> Surely this is an uncore PMU? It represents a shared resource, with
> shared counters, so it should be.
> 
> If you need to encode a set of CPUs, use a cpumask.
> 

Agreed. I will use a cpumask.

>> +	atomic64_t prev_count[MAX_L2_CTRS];
>> +	spinlock_t pmu_lock;
>> +};
>> +
>> +/*
>> + * Aggregate PMU. Implements the core pmu functions and manages
>> + * the hardware PMUs.
>> + */
>> +struct l2cache_pmu {
>> +	u32 num_pmus;
>> +	struct list_head pmus;
>> +	struct pmu pmu;
>> +	int num_counters;
>> +};
>> +
>> +#define to_l2cache_pmu(p) (container_of(p, struct l2cache_pmu, pmu))
>> +
>> +static DEFINE_PER_CPU(struct hml2_pmu *, cpu_to_pmu);
>> +static struct l2cache_pmu l2cache_pmu = { 0 };
>> +static int num_cpus_in_cluster;
>> +
>> +static u32 l2_cycle_ctr_idx;
>> +static u32 l2_reset_mask;
>> +static u32 mpidr_affl1_shift;
> 
> Eww. Drivers really shouldn't be messing with the MPIDR. The precise
> values are bound to change between generations of SoCs leaving us with a
> mess.
> 
> The FW should tell us precisely which CPUs the device is affine to.
> 

During partial-goods processing, firmware renumbers the CPUs.
The only association between the CPU numbers the kernel sees and the
physical CPUs & slices is through MPIDR. But see a comment below
where I think I can remove some of the MPIDR handling, including
this variable.

>> +static inline u32 idx_to_reg(u32 idx)
>> +{
>> +	u32 bit;
>> +
>> +	if (idx == l2_cycle_ctr_idx)
>> +		bit = BIT(L2CYCLE_CTR_BIT);
>> +	else
>> +		bit = BIT(idx);
>> +	return bit;
>> +}
> 
> Probably worth giving this function a _bit suffix. That makes it less
> confusing when it's used later.
> 

OK.

> [...]
> 
>> +static inline void hml2_pmu__enable(void)
>> +{
>> +	/* Ensure all programming commands are done before proceeding */
>> +	wmb();
>> +	set_l2_indirect_reg(L2PMCR, L2PMCR_GLOBAL_ENABLE);
>> +}
>> +
>> +static inline void hml2_pmu__disable(void)
>> +{
>> +	set_l2_indirect_reg(L2PMCR, L2PMCR_GLOBAL_DISABLE);
>> +	/* Ensure the basic counter unit is stopped before proceeding */
>> +	wmb();
>> +}
> 
> What does set_l2_indirect_reg do? IIRC it does system register accesses,
> so I don't see why wmb() (A.K.A dsb(st)) would ensure completion of
> those. So what's going on in hml2_pmu__disable()?
> 
> What exactly are you trying to order here?
> 

I think this is left over from a previous version - I'll remove it.

>> +static inline void hml2_pmu__counter_set_value(u32 idx, u64 value)
>> +{
>> +	u32 counter_reg;
>> +
>> +	if (idx == l2_cycle_ctr_idx) {
>> +		set_l2_indirect_reg(L2PMCCNTR, value);
>> +	} else {
>> +		counter_reg = (idx * 16) + IA_L2PMXEVCNTR_BASE;
>> +		set_l2_indirect_reg(counter_reg, (u32)(value & 0xFFFFFFFF));
>> +	}
>> +}
>> +
>> +static inline u64 hml2_pmu__counter_get_value(u32 idx)
>> +{
>> +	u64 value;
>> +	u32 counter_reg;
>> +
>> +	if (idx == l2_cycle_ctr_idx) {
>> +		value = get_l2_indirect_reg(L2PMCCNTR);
>> +	} else {
>> +		counter_reg = (idx * 16) + IA_L2PMXEVCNTR_BASE;
>> +		value = get_l2_indirect_reg(counter_reg);
>> +	}
>> +
>> +	return value;
>> +}
> 
> It would be good to explain the magic 16 for these two (ideally using
> some well-named mnemonic).
> 

Will do.

> [...]
> 
>> +static inline int event_to_bit(unsigned int group)
>> +{
>> +	/* Lower bits are reserved for use by the counters */
>> +	return group + MAX_L2_CTRS + 1;
>> +}
> 
> What exactly is a group in this context? Why is this not group_to_bit?
> 

This is the column (group) part of the event format. If the group
used_mask gets separated out from the existing used_mask, this will go
away.

>> +static int l2_cache__get_event_idx(struct hml2_pmu *slice,
>> +				   struct perf_event *event)
>> +{
>> +	struct hw_perf_event *hwc = &event->hw;
>> +	int idx;
>> +	int bit;
>> +
>> +	if (hwc->config_base == L2CYCLE_CTR_RAW_CODE) {
>> +		if (test_and_set_bit(l2_cycle_ctr_idx, slice->used_mask))
>> +			return -EAGAIN;
>> +
>> +		return l2_cycle_ctr_idx;
>> +	}
>> +
>> +	if (L2_EVT_GROUP(hwc->config_base) > L2_EVT_GROUP_MAX)
>> +		return -EINVAL;
> 
> This doesn't look right. L2_EVT_GROUP() masks out and shifts out bits,
> so I can pass an invalid value and it will be silently coerced to some
> valid value.
> 

I will add event validation in __event_init(). Then this check won't be needed here.

>> +
>> +	/* check for column exclusion */
>> +	bit = event_to_bit(L2_EVT_GROUP(hwc->config_base));
>> +	if (test_and_set_bit(bit, slice->used_mask)) {
>> +		pr_err("column exclusion for evt %lx\n", hwc->config_base);
>> +		event->state = PERF_EVENT_STATE_OFF;
>> +		return -EINVAL;
>> +	}
>> +
>> +	for (idx = 0; idx < l2cache_pmu.num_counters - 1; idx++) {
>> +		if (!test_and_set_bit(idx, slice->used_mask))
>> +			return idx;
>> +	}
>> +
>> +	/* The counters are all in use. */
>> +	clear_bit(bit, slice->used_mask);
>> +	return -EAGAIN;
>> +}
> 
> [...]
> 
>> +static irqreturn_t l2_cache__handle_irq(int irq_num, void *data)
>> +{
>> +	struct hml2_pmu *slice = data;
>> +	u32 ovsr;
>> +	int idx;
>> +	struct pt_regs *regs;
>> +
>> +	ovsr = hml2_pmu__getreset_ovsr();
>> +	if (!hml2_pmu__has_overflowed(ovsr))
>> +		return IRQ_NONE;
>> +
>> +	regs = get_irq_regs();
>> +
>> +	for (idx = 0; idx < l2cache_pmu.num_counters; idx++) {
>> +		struct perf_event *event = slice->events[idx];
>> +		struct hw_perf_event *hwc;
>> +		struct perf_sample_data data;
>> +
>> +		if (!event)
>> +			continue;
>> +
>> +		if (!hml2_pmu__counter_has_overflowed(ovsr, idx))
>> +			continue;
>> +
>> +		l2_cache__event_update_from_slice(event, slice);
>> +		hwc = &event->hw;
>> +
>> +		if (is_sampling_event(event)) {
>> +			perf_sample_data_init(&data, 0, hwc->last_period);
> 
> I don't think sampling makes sense, given this is an uncore PMU and the
> events are triggered by other CPUs.

There is origin filtering so events can be attributed to a CPU when sampling.

> 
>> +			if (!l2_cache__event_set_period(event, hwc))
>> +				continue;
>> +			if (perf_event_overflow(event, &data, regs))
>> +				l2_cache__event_disable(event);
>> +		} else {
>> +			l2_cache__slice_set_period(slice, hwc);
>> +		}
>> +	}
>> +
>> +	/*
>> +	 * Handle the pending perf events.
>> +	 *
>> +	 * Note: this call *must* be run with interrupts disabled. For
>> +	 * platforms that can have the PMU interrupts raised as an NMI, this
>> +	 * will not work.
>> +	 */
>> +	irq_work_run();
>> +
>> +	return IRQ_HANDLED;
>> +}
> 
> [...]
> 
>> +static int l2_cache__event_init(struct perf_event *event)
>> +{
>> +	struct hw_perf_event *hwc = &event->hw;
>> +	struct hml2_pmu *slice;
>> +
>> +	if (event->attr.type != l2cache_pmu.pmu.type)
>> +		return -ENOENT;
>> +
>> +	/* We cannot filter accurately so we just don't allow it. */
>> +	if (event->attr.exclude_user || event->attr.exclude_kernel ||
>> +			event->attr.exclude_hv || event->attr.exclude_idle)
>> +		return -EINVAL;
>> +
> 
> Please also check grouping. For instance, you definitely don't support
> event->pmu != event->group_leader->pmu, modulo some weirdness with
> software events. See drivers/bus/arm-ccn.c.
> 
> Do you support groups of events at all?
> 
> If so, you must validate that the groups are also schedulable.
> 

I'll add grouping checks

>> +	hwc->idx = -1;
>> +	hwc->config_base = event->attr.config;
>> +
>> +	/*
>> +	 * Ensure all events are on the same cpu so all events are in the
>> +	 * same cpu context, to avoid races on pmu_enable etc.
>> +	 * For the first event, record its CPU in slice->on_cpu.
>> +	 * For subsequent events on that slice, force the event to that cpu.
>> +	 */
>> +	slice = get_hml2_pmu(to_l2cache_pmu(event->pmu), event->cpu);
>> +	if (slice->on_cpu == -1)
>> +		slice->on_cpu = event->cpu;
>> +	else
>> +		event->cpu = slice->on_cpu;
> 
> Please just do the usual thing for uncore PMU CPU handling. Take a look
> at the arm-ccn driver.
> 

I'll look at arm-ccn's handling


>> +	/*
>> +	 * For counting events use L2_CNT_PERIOD which allows for simplified
>> +	 * math and proper handling of overflows.
>> +	 */
>> +	if (hwc->sample_period == 0) {
>> +		hwc->sample_period = L2_CNT_PERIOD;
>> +		hwc->last_period   = hwc->sample_period;
>> +		local64_set(&hwc->period_left, hwc->sample_period);
>> +	}
>> +
>> +	return 0;
>> +}
> 
> Huh? You haven't validated the event. Please validate the config is a
> real, countable event that you support before assuming success.
> 

I'll add validation here

>> +static void l2_cache__event_update(struct perf_event *event)
>> +{
>> +	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
>> +	struct hml2_pmu *slice;
>> +	struct hw_perf_event *hwc = &event->hw;
>> +
>> +	if (hwc->idx == -1)
>> +		return;
>> +
>> +	slice = get_hml2_pmu(system, event->cpu);
>> +	if (likely(slice))
>> +		l2_cache__event_update_from_slice(event, slice);
>> +}
> 
> When is slice NULL?
> 

I'll remove this

> [...]
> 
>> +static int l2_cache__event_add(struct perf_event *event, int flags)
>> +{
>> +	struct l2cache_pmu *system = to_l2cache_pmu(event->pmu);
>> +	struct hw_perf_event *hwc = &event->hw;
>> +	int idx;
>> +	int err = 0;
>> +	struct hml2_pmu *slice;
>> +
>> +	slice = get_hml2_pmu(system, event->cpu);
>> +	if (!slice) {
>> +		event->state = PERF_EVENT_STATE_OFF;
>> +		hwc->idx = -1;
>> +		goto out;
>> +	}
>> +
>> +	/*
>> +	 * This checks for a duplicate event on the same cluster, which
>> +	 * typically occurs in non-sampling mode when using perf -a,
>> +	 * which generates events on each CPU. In this case, we don't
>> +	 * want to permanently disable the event by setting its state to
>> +	 * OFF, because if the other CPU is subsequently hotplugged, etc,
>> +	 * we want the opportunity to start collecting on this event.
>> +	 */
>> +	if (config_is_dup(slice, hwc)) {
>> +		hwc->idx = -1;
>> +		goto out;
>> +	}
> 
> Eww. This indicates you're just hacking around userspace.
> 
> Make this a uncore PMU, and expose a cpumask. See the arm-ccn driver.
> 

I'll add cpumask

>> +
>> +	idx = l2_cache__get_event_idx(slice, event);
>> +	if (idx < 0) {
>> +		err = idx;
>> +		goto out;
>> +	}
>> +
>> +	hwc->idx = idx;
>> +	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
>> +	slice->events[idx] = event;
>> +	atomic64_set(&slice->prev_count[idx], 0ULL);
>> +
>> +	if (flags & PERF_EF_START)
>> +		l2_cache__event_start(event, flags);
>> +
>> +	/* Propagate changes to the userspace mapping. */
>> +	perf_event_update_userpage(event);
>> +
>> +out:
>> +	return err;
>> +}
> 
> [...]
> 
>> +static inline int mpidr_to_cpu(u32 mpidr)
>> +{
>> +	return MPIDR_AFFINITY_LEVEL(mpidr, 0) |
>> +		(MPIDR_AFFINITY_LEVEL(mpidr, 1) << mpidr_affl1_shift);
>> +}
> 
> I don't follow the logic here.
> 
> What exactly are you trying to get? What space does the result exist in?
> It's neither the Linux logical ID nor the physical ID.
> 

It's the physical CPU number, constructed out of MPIDR.AFFL1 (the cluster
number) and AFFL0 (the CPU number within the cluster).

The goal is to get a list of logical CPUs associated with each slice. As
mentioned above, because of partial goods processing the only way to know
this is to use the physical id of each CPU from cpu_logical_map().

An alternative would be to find each logical CPU whose cluster
(MPIDR_AFFINITY_LEVEL(mpidr, 1)) matches the slice number from firmware.
This would remove the need for AFFL0, affl1_shift and the
num_cpus_in_cluster property, at the expense of searching every logical
cpu per slice. That may be a better approach?

>> +
>> +static int get_logical_cpu_index(int phys_cpu)
>> +{
>> +	int cpu;
>> +
>> +	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
>> +		if (mpidr_to_cpu(cpu_logical_map(cpu)) == phys_cpu)
>> +			return cpu;
>> +	return -EINVAL;
>> +}
> 
> As above, I really don't follow this.
> 
> What exactly is phys_cpu?
> 
>> +static int l2_cache_pmu_probe_slice(struct device *dev, void *data)
>> +{
>> +	struct platform_device *pdev = to_platform_device(dev->parent);
>> +	struct platform_device *sdev = to_platform_device(dev);
>> +	struct l2cache_pmu *l2cache_pmu = data;
>> +	struct hml2_pmu *slice;
>> +	struct cpumask affinity_mask;
>> +	struct acpi_device *device;
>> +	int irq, err;
>> +	int phys_cpu, logical_cpu;
>> +	int i;
>> +	unsigned long instance;
>> +
>> +	if (acpi_bus_get_device(ACPI_HANDLE(dev), &device))
>> +		return -ENODEV;
>> +
>> +	if (kstrtol(device->pnp.unique_id, 10, &instance) < 0) {
>> +		pr_err("unable to read ACPI uid\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	irq = platform_get_irq(sdev, 0);
>> +	if (irq < 0) {
>> +		dev_err(&pdev->dev,
>> +			"Failed to get valid irq for slice %ld\n", instance);
>> +		return irq;
>> +	}
>> +
>> +	slice = devm_kzalloc(&pdev->dev, sizeof(*slice), GFP_KERNEL);
>> +	if (!slice)
>> +		return -ENOMEM;
>> +
>> +	cpumask_clear(&affinity_mask);
>> +	slice->on_cpu = -1;
>> +
>> +	for (i = 0; i < num_cpus_in_cluster; i++) {
>> +		phys_cpu = instance * num_cpus_in_cluster + i;
>> +		logical_cpu = get_logical_cpu_index(phys_cpu);
>> +		if (logical_cpu >= 0) {
>> +			slice->cpu[slice->valid_cpus++] = logical_cpu;
>> +			per_cpu(cpu_to_pmu, logical_cpu) = slice;
>> +			cpumask_set_cpu(logical_cpu, &affinity_mask);
>> +		}
>> +	}
> 
> Please, get a better, explicit, description of CPU affinity, rather than
> depending on a fragile unique ID and an arbitrary MPIDR decomposition
> scheme.
> 
>> +
>> +	if (slice->valid_cpus == 0) {
>> +		dev_err(&pdev->dev, "No CPUs found for L2 cache instance %ld\n",
>> +			instance);
>> +		return -ENODEV;
>> +	}
>> +
>> +	if (irq_set_affinity(irq, &affinity_mask)) {
>> +		dev_err(&pdev->dev,
>> +			"Unable to set irq affinity (irq=%d, first cpu=%d)\n",
>> +			irq, slice->cpu[0]);
>> +		return -ENODEV;
>> +	}
> 
> I didn't spot any CPU notifier. Consider hotplug and the automigration
> of IRQs.
> 

I'll add the notifier

>> +
>> +	err = devm_request_irq(
>> +		&pdev->dev, irq, l2_cache__handle_irq,
>> +		IRQF_NOBALANCING, "l2-cache-pmu", slice);
>> +	if (err) {
>> +		dev_err(&pdev->dev,
>> +			"Unable to request IRQ%d for L2 PMU counters\n",
>> +			irq);
>> +		return err;
>> +	}
>> +
>> +	dev_info(&pdev->dev, "Registered L2 cache PMU instance %ld with %d CPUs\n",
>> +		 instance, slice->valid_cpus);
>> +
>> +	slice->pmu_lock = __SPIN_LOCK_UNLOCKED(slice->pmu_lock);
>> +
>> +	hml2_pmu__init(slice);
>> +	list_add(&slice->entry, &l2cache_pmu->pmus);
>> +	l2cache_pmu->num_pmus++;
>> +
>> +	return 0;
>> +}
>> +
>> +static int l2_cache_pmu_probe(struct platform_device *pdev)
>> +{
>> +	int result;
>> +	int err;
>> +
>> +	INIT_LIST_HEAD(&l2cache_pmu.pmus);
>> +
>> +	l2cache_pmu.pmu = (struct pmu) {
>> +		.name		= "l2cache",
>> +		.pmu_enable	= l2_cache__pmu_enable,
>> +		.pmu_disable	= l2_cache__pmu_disable,
>> +		.event_init	= l2_cache__event_init,
>> +		.add		= l2_cache__event_add,
>> +		.del		= l2_cache__event_del,
>> +		.start		= l2_cache__event_start,
>> +		.stop		= l2_cache__event_stop,
>> +		.read		= l2_cache__event_read,
>> +		.attr_groups	= l2_cache_pmu_attr_grps,
>> +	};
>> +
>> +	l2cache_pmu.num_counters = get_num_counters();
>> +	l2_cycle_ctr_idx = l2cache_pmu.num_counters - 1;
>> +	l2_reset_mask = ((1 << (l2cache_pmu.num_counters - 1)) - 1) |
>> +		L2PM_CC_ENABLE;
>> +
>> +	err = device_property_read_u32(&pdev->dev, "qcom,num-cpus-in-cluster",
>> +				       &num_cpus_in_cluster);
> 
> This really is not a property of the L2. It's a larger topological
> detail.
> 
> Describe the CPU affinity explicitly rather than trying to hackishly
> build it up from myriad sources.
> 
>> +	if (err) {
>> +		dev_err(&pdev->dev, "Failed to read number of cpus in cluster\n");
>> +		return err;
>> +	}
>> +	mpidr_affl1_shift = hweight8(num_cpus_in_cluster - 1);
>> +
>> +	/* Read slice info and initialize each slice */
>> +	result = device_for_each_child(&pdev->dev, &l2cache_pmu,
>> +				       l2_cache_pmu_probe_slice);
>> +
>> +	if (result < 0)
>> +		return -ENODEV;
>> +
>> +	if (l2cache_pmu.num_pmus == 0) {
>> +		dev_err(&pdev->dev, "No hardware L2 PMUs found\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	result = perf_pmu_register(&l2cache_pmu.pmu,
>> +				   l2cache_pmu.pmu.name, -1);
>> +
> 
> Might you have multiple sockets? You probably want an instance ID on the
> PMU name.
> 

This is just a single socket implementation.

> Thanks,
> Mark.
> 

Neil

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/2] qcom: add l2 cache perf events driver
  2016-06-06  9:04 ` [PATCH 0/2] " Mark Rutland
@ 2016-06-08 15:21   ` Neil Leeder
  2016-06-08 16:12     ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Neil Leeder @ 2016-06-08 15:21 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov, nleeder



On 6/6/2016 05:04 AM, Mark Rutland wrote:
> On Fri, Jun 03, 2016 at 05:03:30PM -0400, Neil Leeder wrote:
>> This adds a new dynamic PMU to the Perf Events framework to program
>> and control the L2 cache PMUs in some Qualcomm Technologies SOCs.
>>
>> The driver exports formatting and event information to sysfs so it can
>> be used by the perf user space tools with the syntax:
>> perf stat -e l2cache/event=0x42/
>>
>> One point to note is that there are certain combinations of events
>> which are invalid, and which are detected in event_add().
> 
> Which combinations of events are invalid?
> 
> Please elaborate.
> 
>> Simply having event_add() fail would result in event_sched_in() making
>> it Inactive, treating it as over-allocation of counters, leading to
>> repeated attempts to allocate the events and ending up with a
>> statistical count.  A solution for this situation is to turn the
>> conflicting event off in event_add(). This allows a single error
>> message to be generated, and no recurring attempts to re-add the
>> invalid event. In order for this to work, event_sched_in()
>> needs to detect that event_add() changed the state, and not override it
>> and force it to Inactive.
> 
> For heterogeneous PMUs, we added the pmu::filter_match(event) callback
> for a similar purpose: preventing an event from being scheduled on a
> core which does not support that event, while allowing other events to
> be scheduled.
> 
> So if you truly need to filter events, the infrastructure for doing so
> already exists.
> 
> However, you will need to elaborate on "there are certain combinations
> of events which are invalid".
> 

Qualcomm PMUs have events arranged in a matrix of rows and columns.
Only one event can be enabled from each column at once. So this isn't a
heterogeneous CPU issue, and it doesn't seem to fit into filter_match()
because it is not an absolute restriction that this event can't be
enabled on this cpu, it's related to the other events which have 
already been enabled.

>> This patchset requires:
>> [PATCH] soc: qcom: provide mechanism for drivers to access L2 registers
> 
> A link would be remarkably helpful.

http://archive.arm.linux.org.uk/lurker/message/20160603.205900.1970f20d.en.html

> 
> Better would be to fold that patch into this series, as it's the only
> user, and both are helpful review context for the other.
> 

The L2 PMU driver is the first user of the L2-accessors patch
but it won't be the only one, which is why I kept it separate.

> Thanks,
> Mark.
> 

Neil

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.


* Re: [PATCH 0/2] qcom: add l2 cache perf events driver
  2016-06-08 15:21   ` Neil Leeder
@ 2016-06-08 16:12     ` Mark Rutland
  2016-06-08 19:29       ` Neil Leeder
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2016-06-08 16:12 UTC (permalink / raw)
  To: Neil Leeder
  Cc: Catalin Marinas, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov

On Wed, Jun 08, 2016 at 11:21:16AM -0400, Neil Leeder wrote:
> 
> 
> On 6/6/2016 05:04 AM, Mark Rutland wrote:
> > On Fri, Jun 03, 2016 at 05:03:30PM -0400, Neil Leeder wrote:
> >> This adds a new dynamic PMU to the Perf Events framework to program
> >> and control the L2 cache PMUs in some Qualcomm Technologies SOCs.
> >>
> >> The driver exports formatting and event information to sysfs so it can
> >> be used by the perf user space tools with the syntax:
> >> perf stat -e l2cache/event=0x42/
> >>
> >> One point to note is that there are certain combinations of events
> >> which are invalid, and which are detected in event_add().
> > 
> > Which combinations of events are invalid?
> > 
> > Please elaborate.
> > 
> >> Simply having event_add() fail would result in event_sched_in() making
> >> it Inactive, treating it as over-allocation of counters, leading to
> >> repeated attempts to allocate the events and ending up with a
> >> statistical count.  A solution for this situation is to turn the
> >> conflicting event off in event_add(). This allows a single error
> >> message to be generated, and no recurring attempts to re-add the
> >> invalid event. In order for this to work, event_sched_in()
> >> needs to detect that event_add() changed the state, and not override it
> >> and force it to Inactive.
> > 
> > For heterogeneous PMUs, we added the pmu::filter_match(event) callback
> > for a similar purpose: preventing an event from being scheduled on a
> > core which does not support that event, while allowing other events to
> > be scheduled.
> > 
> > So if you truly need to filter events, the infrastructure for doing so
> > already exists.
> > 
> > However, you will need to elaborate on "there are certain combinations
> > of events which are invalid".
> > 
> 
> Qualcomm PMUs have events arranged in a matrix of rows and columns.
> Only one event can be enabled from each column at once. So this isn't a
> heterogeneous CPU issue, and it doesn't seem to fit into filter_match()
> because it is not an absolute restriction that this event can't be
> enabled on this cpu, it's related to the other events which have 
> already been enabled.

The above is useful context. Please add (something like) it to the cover
and relevant patches in future postings!

Ok. So if I understand correctly, each counter can only count certain
events (and therefore each event can only go into some counters), rather
than all counters being identical?

So the issue is that there is no _suitable_ counter available for an
event, but there are still counters available for events in general.

This case is somewhat different to the heterogeneous PMU case.

Unfortunately, trying to filter events in this manner can be very
expensive, and allows a malicious user to DoS the system, as Peter
pointed out when I tried to do similar things in this area. Take a look
at [1] and associated replies.

If you can test the availability of a relevant counter very cheaply,
then having a specific return code for the case of no relevant counter
may be more palatable.

> >> This patchset requires:
> >> [PATCH] soc: qcom: provide mechanism for drivers to access L2 registers
> > 
> > A link would be remarkably helpful.
> 
> http://archive.arm.linux.org.uk/lurker/message/20160603.205900.1970f20d.en.html
> 
> > 
> > Better would be to fold that patch into this series, as it's the only
> > user, and both are helpful review context for the other.
> > 
> 
> The L2 PMU driver is the first user of the L2-accessors patch
> but it won't be the only one, which is why I kept it separate.

If other users aren't going to appear in the same merge window, IMO it
would be better to place them in the same series for now. Otherwise,
please have a link in the cover in future postings.

Thanks,
Mark.

[1] http://lkml.kernel.org/r/1392054264-23570-5-git-send-email-mark.rutland@arm.com


* Re: [PATCH 0/2] qcom: add l2 cache perf events driver
  2016-06-08 16:12     ` Mark Rutland
@ 2016-06-08 19:29       ` Neil Leeder
  0 siblings, 0 replies; 13+ messages in thread
From: Neil Leeder @ 2016-06-08 19:29 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov, nleeder



On 6/8/2016 12:12 PM, Mark Rutland wrote:
> On Wed, Jun 08, 2016 at 11:21:16AM -0400, Neil Leeder wrote:
>>
>>
>> On 6/6/2016 05:04 AM, Mark Rutland wrote:
>>> On Fri, Jun 03, 2016 at 05:03:30PM -0400, Neil Leeder wrote:
>>>> This adds a new dynamic PMU to the Perf Events framework to program
>>>> and control the L2 cache PMUs in some Qualcomm Technologies SOCs.
>>>>
>>>> The driver exports formatting and event information to sysfs so it can
>>>> be used by the perf user space tools with the syntax:
>>>> perf stat -e l2cache/event=0x42/
>>>>
>>>> One point to note is that there are certain combinations of events
>>>> which are invalid, and which are detected in event_add().
>>>
>>> Which combinations of events are invalid?
>>>
>>> Please elaborate.
>>>
>>>> Simply having event_add() fail would result in event_sched_in() making
>>>> it Inactive, treating it as over-allocation of counters, leading to
>>>> repeated attempts to allocate the events and ending up with a
>>>> statistical count.  A solution for this situation is to turn the
>>>> conflicting event off in event_add(). This allows a single error
>>>> message to be generated, and no recurring attempts to re-add the
>>>> invalid event. In order for this to work, event_sched_in()
>>>> needs to detect that event_add() changed the state, and not override it
>>>> and force it to Inactive.
>>>
>>> For heterogeneous PMUs, we added the pmu::filter_match(event) callback
>>> for a similar purpose: preventing an event from being scheduled on a
>>> core which does not support that event, while allowing other events to
>>> be scheduled.
>>>
>>> So if you truly need to filter events, the infrastructure for doing so
>>> already exists.
>>>
>>> However, you will need to elaborate on "there are certain combinations
>>> of events which are invalid".
>>>
>>
>> Qualcomm PMUs have events arranged in a matrix of rows and columns.
>> Only one event can be enabled from each column at once. So this isn't a
>> heterogeneous CPU issue, and it doesn't seem to fit into filter_match()
>> because it is not an absolute restriction that this event can't be
>> enabled on this cpu, it's related to the other events which have 
>> already been enabled.
> 
> The above is useful context. Please add (something like) it to the cover
> and relevant patches in future postings!
> 
> Ok. So if I understand correctly, each counter can only count certain
> events (and therefore each event can only go into some counters), rather
> than all counters being identical?
> 
> So the issue is that there is no _suitable_ counter available for an
> event, but there are still counters available for events in general.
> 
> This case is somewhat different to the heterogeneous PMU case.
> 
> Unfortunately, trying to filter events in this manner can be very
> expensive, and allows a malicious user to DoS the system, as Peter
> pointed out when I tried to do similar things in this area. Take a look
> at [1] and associated replies.
> 
> If you can test the availability of a relevant counter very cheaply,
> then having a specific return code for the case of no relevant counter
> may be more palatable.
> 

Not quite. Any event can go into any counter, but once an event from a given
column has been assigned to a counter, no other events from the same column
can be placed in any other counter.

Here I detect this condition on the first call to pmu->add() for
the conflicting event, and turn that event's state to Off.
That should ensure there are no more attempts to schedule it, which should avoid
DoS concerns.

But I may see if filter_match() could be used here anyway. Instead of having a
static list of valid PMUs, look at the list of already enabled events for this PMU
and fail if the conflict is detected. I think this would remove the need for a
change in state if add() is never called for the event.

>>>> This patchset requires:
>>>> [PATCH] soc: qcom: provide mechanism for drivers to access L2 registers
>>>
>>> A link would be remarkably helpful.
>>
>> http://archive.arm.linux.org.uk/lurker/message/20160603.205900.1970f20d.en.html
>>
>>>
>>> Better would be to fold that patch into this series, as it's the only
>>> user, and both are helpful review context for the other.
>>>
>>
>> The L2 PMU driver is the first user of the L2-accessors patch
>> but it won't be the only one, which is why I kept it separate.
> 
> If other users aren't going to appear in the same merge window, IMO it
> would be better to place them in the same series for now. Otherwise,
> please have a link in the cover in future postings.

Ok, makes sense.

> Thanks,
> Mark.
> 
> [1] http://lkml.kernel.org/r/1392054264-23570-5-git-send-email-mark.rutland@arm.com
> 

Neil

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.


* Re: [PATCH 2/2] soc: qcom: add l2 cache perf events driver
  2016-06-08 15:16     ` Neil Leeder
@ 2016-06-09 15:56       ` Mark Rutland
  2016-06-09 19:41         ` Peter Zijlstra
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2016-06-09 15:56 UTC (permalink / raw)
  To: Neil Leeder
  Cc: Catalin Marinas, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov

On Wed, Jun 08, 2016 at 11:16:06AM -0400, Neil Leeder wrote:
> On 6/6/2016 05:51 AM, Mark Rutland wrote:
> > On Fri, Jun 03, 2016 at 05:03:32PM -0400, Neil Leeder wrote:
> >> +/*
> >> + * The cache is made-up of one or more slices, each slice has its own PMU.
> >> + * This structure represents one of the hardware PMUs.
> >> + */
> > 
> > I take it each slice PMU is shared by several CPUs? i.e. there aren't
> > per-cpu slice PMU counters.
> > 
> 
> That is correct.

Ok, so this is certainly an uncore PMU.

That impacts a few things (e.g. task_ctx_nr, sampling), which I've tried
to elaborate on below.

> >> +struct hml2_pmu {
> >> +	struct list_head entry;
> >> +	struct perf_event *events[MAX_L2_CTRS];
> >> +	unsigned long used_mask[BITS_TO_LONGS(MAX_L2_EVENTS)];
> > 
> > What's the difference between MAX_L2_CTRS and MAX_L2_EVENTS?
> > 
> > I'm surprised that they are different. What precisely do either
> > represent?
> > 
> > Surely you don't have different events per-slice? Why do you need the
> > PMU pointers at the slice level?
> 
> Qualcomm PMUs have events arranged in a matrix of rows (codes) and columns (groups).
> Only one event can be enabled from each group at once.
> The upper part of used_mask is used to keep a record of which group has been
> used. This is the same mechanism used in armv7
> (arch/arm/perf_event_v7.c:krait_event_to_bit()).

Ah, I hadn't realised that was the case.

That's not how used_mask was originally intended to be used (it was
simply meant to cover which counters were in use), and I suspect that
means there are problems with things like cpu_pm_pmu_setup which
expect that meaning.

It would be better if the Krait code were not doing that.

> So used_mask contains both an indication for a physical counter in use, and also
> for the group, which is why it's a different size from MAX_L2_CTRS.
> 
> I kept this because it's what's done in armv7. If there's an objection, I can
> move the group used_mask to its own bitmap.

These are two logically independent things, so I would split them.

If nothing else, placing them together has baffled me, and that should
demonstrate it's not trivial to follow.

> >> +static irqreturn_t l2_cache__handle_irq(int irq_num, void *data)
> >> +{
> >> +	struct hml2_pmu *slice = data;
> >> +	u32 ovsr;
> >> +	int idx;
> >> +	struct pt_regs *regs;
> >> +
> >> +	ovsr = hml2_pmu__getreset_ovsr();
> >> +	if (!hml2_pmu__has_overflowed(ovsr))
> >> +		return IRQ_NONE;
> >> +
> >> +	regs = get_irq_regs();
> >> +
> >> +	for (idx = 0; idx < l2cache_pmu.num_counters; idx++) {
> >> +		struct perf_event *event = slice->events[idx];
> >> +		struct hw_perf_event *hwc;
> >> +		struct perf_sample_data data;
> >> +
> >> +		if (!event)
> >> +			continue;
> >> +
> >> +		if (!hml2_pmu__counter_has_overflowed(ovsr, idx))
> >> +			continue;
> >> +
> >> +		l2_cache__event_update_from_slice(event, slice);
> >> +		hwc = &event->hw;
> >> +
> >> +		if (is_sampling_event(event)) {
> >> +			perf_sample_data_init(&data, 0, hwc->last_period);
> > 
> > I don't think sampling makes sense, given this is an uncore PMU and the
> > events are triggered by other CPUs.
> 
> There is origin filtering so events can be attributed to a CPU when sampling.

Ok. I believe that's different from all other uncore PMUs we support
(none of the drivers support sampling, certainly), so I'm not entirely
sure how/if we can make use of that sanely and reliably.

For the timebeing, I think this sampling needs to go, and the event_init
logic needs to reject sampling as with other uncore PMU drivers.

> >> +static inline int mpidr_to_cpu(u32 mpidr)
> >> +{
> >> +	return MPIDR_AFFINITY_LEVEL(mpidr, 0) |
> >> +		(MPIDR_AFFINITY_LEVEL(mpidr, 1) << mpidr_affl1_shift);
> >> +}
> > 
> > I don't follow the logic here.
> > 
> > What exactly are you trying to get? What space does the result exist in?
> > It's neither the Linux logical ID nor the physical ID.
> 
> It's the physical CPU number, constructed out of the MPIDR.AFFL1 (cluster number)
> and AFFL0 (CPU number within the cluster).
>
> The goal is to get a list of logical CPUs associated with each slice. As mentioned
> above, because of partial goods processing the only way to know this is to involve
> the physical id of each CPU from cpu_logical_map().

So this is an implementation defined physical CPU number? Please add a
comment describing it, so as to make clear it's not the MPIDR, Linux
logical ID, ACPI ID, etc.

I take it "partial good processing" is another term for binning, or some
initialisation logic related to it?

> An alternative way to process this would be to find each logical CPU where its
> cluster (MPIDR_AFFINITY_LEVEL( ,1)) matches the slice number from firmware.
> This would remove the need for AFFL0, affl1_shift and the num_cpus_in_cluster
> property, at the expense of searching every logical cpu per slice.
> And that may be a better approach?

This all sounds rather fragile. I can imagine a new revision where the
mapping scheme is completely different.

It would be better if the relationship between a slice and associated
CPUs were described explicitly by the FW, rather than attempting to
derive this detail from the MPIDR.

> >> +static int l2_cache_pmu_probe(struct platform_device *pdev)
> >> +{
> >> +	int result;
> >> +	int err;
> >> +
> >> +	INIT_LIST_HEAD(&l2cache_pmu.pmus);
> >> +
> >> +	l2cache_pmu.pmu = (struct pmu) {
> >> +		.name		= "l2cache",
> >> +		.pmu_enable	= l2_cache__pmu_enable,
> >> +		.pmu_disable	= l2_cache__pmu_disable,
> >> +		.event_init	= l2_cache__event_init,
> >> +		.add		= l2_cache__event_add,
> >> +		.del		= l2_cache__event_del,
> >> +		.start		= l2_cache__event_start,
> >> +		.stop		= l2_cache__event_stop,
> >> +		.read		= l2_cache__event_read,
> >> +		.attr_groups	= l2_cache_pmu_attr_grps,
> >> +	};

One thing I forgot to mention in my earlier comments is that as an
uncore PMU you need to have task_ctx_nr = perf_invalid_context here
also.

That will prevent conflict with the CPU PMU, though also means that this
PMU can't be allowed to open task-following events, nor can it strictly
monitor particular CPUs. See the arm-ccn driver for further details.

> >> +	result = perf_pmu_register(&l2cache_pmu.pmu,
> >> +				   l2cache_pmu.pmu.name, -1);
> >> +
> > 
> > Might you have multiple sockets? You probably want an instance ID on the
> > PMU name.
> > 
> 
> This is just a single socket implementation.

Sure, but is there any chance this may appear in a multi-socket
implementation in future?

The name is exposed to userspace as ABI, and you'd find it difficult to
change it at that point.

Thanks,
Mark.


* Re: [PATCH 2/2] soc: qcom: add l2 cache perf events driver
  2016-06-09 15:56       ` Mark Rutland
@ 2016-06-09 19:41         ` Peter Zijlstra
  2016-06-10 22:34           ` Neil Leeder
  0 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2016-06-09 19:41 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Neil Leeder, Catalin Marinas, Will Deacon, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov

On Thu, Jun 09, 2016 at 04:56:16PM +0100, Mark Rutland wrote:
> > >> +static irqreturn_t l2_cache__handle_irq(int irq_num, void *data)
> > >> +{
> > >> +	struct hml2_pmu *slice = data;
> > >> +	u32 ovsr;
> > >> +	int idx;
> > >> +	struct pt_regs *regs;
> > >> +
> > >> +	ovsr = hml2_pmu__getreset_ovsr();
> > >> +	if (!hml2_pmu__has_overflowed(ovsr))
> > >> +		return IRQ_NONE;
> > >> +
> > >> +	regs = get_irq_regs();
> > >> +
> > >> +	for (idx = 0; idx < l2cache_pmu.num_counters; idx++) {
> > >> +		struct perf_event *event = slice->events[idx];
> > >> +		struct hw_perf_event *hwc;
> > >> +		struct perf_sample_data data;
> > >> +
> > >> +		if (!event)
> > >> +			continue;
> > >> +
> > >> +		if (!hml2_pmu__counter_has_overflowed(ovsr, idx))
> > >> +			continue;
> > >> +
> > >> +		l2_cache__event_update_from_slice(event, slice);
> > >> +		hwc = &event->hw;
> > >> +
> > >> +		if (is_sampling_event(event)) {
> > >> +			perf_sample_data_init(&data, 0, hwc->last_period);
> > > 
> > > I don't think sampling makes sense, given this is an uncore PMU and the
> > > events are triggered by other CPUs.
> > 
> > There is origin filtering so events can be attributed to a CPU when sampling.
> 
> Ok. I believe that's different from all other uncore PMUs we support
> (none of the drivers support sampling, certainly), so I'm not entirely
> sure how/if we can make use of that sanely and reliably.

Right; because not only do you need to know which CPU originated the
event, the IRQ must also happen on that CPU. Simply knowing which CPU
triggered it is not enough for sampling.

> For the time being, I think this sampling needs to go, and the event_init
> logic needs to reject sampling as with other uncore PMU drivers.

Agreed.

> One thing I forgot to mention in my earlier comments is that as an
> uncore PMU you need to have task_ctx_nr = perf_invalid_context here
> also.

For good reasons, uncore PMUs (as is the case here) count strictly more
than the effect of single CPUs (and thus also the current task). So
attributing it back to a task is nonsense.


* Re: [PATCH 2/2] soc: qcom: add l2 cache perf events driver
  2016-06-09 19:41         ` Peter Zijlstra
@ 2016-06-10 22:34           ` Neil Leeder
  0 siblings, 0 replies; 13+ messages in thread
From: Neil Leeder @ 2016-06-10 22:34 UTC (permalink / raw)
  To: Peter Zijlstra, Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm, linux-kernel,
	linux-arm-kernel, Mark Langsdorf, Mark Salter, Jon Masters,
	Timur Tabi, cov, nleeder



On 6/9/2016 03:41 PM, Peter Zijlstra wrote:
> On Thu, Jun 09, 2016 at 04:56:16PM +0100, Mark Rutland wrote:
>>>>> +static irqreturn_t l2_cache__handle_irq(int irq_num, void *data)
>>>>> +{
>>>>> +	struct hml2_pmu *slice = data;
>>>>> +	u32 ovsr;
>>>>> +	int idx;
>>>>> +	struct pt_regs *regs;
>>>>> +
>>>>> +	ovsr = hml2_pmu__getreset_ovsr();
>>>>> +	if (!hml2_pmu__has_overflowed(ovsr))
>>>>> +		return IRQ_NONE;
>>>>> +
>>>>> +	regs = get_irq_regs();
>>>>> +
>>>>> +	for (idx = 0; idx < l2cache_pmu.num_counters; idx++) {
>>>>> +		struct perf_event *event = slice->events[idx];
>>>>> +		struct hw_perf_event *hwc;
>>>>> +		struct perf_sample_data data;
>>>>> +
>>>>> +		if (!event)
>>>>> +			continue;
>>>>> +
>>>>> +		if (!hml2_pmu__counter_has_overflowed(ovsr, idx))
>>>>> +			continue;
>>>>> +
>>>>> +		l2_cache__event_update_from_slice(event, slice);
>>>>> +		hwc = &event->hw;
>>>>> +
>>>>> +		if (is_sampling_event(event)) {
>>>>> +			perf_sample_data_init(&data, 0, hwc->last_period);
>>>>
>>>> I don't think sampling makes sense, given this is an uncore PMU and the
>>>> events are triggered by other CPUs.
>>>
>>> There is origin filtering so events can be attributed to a CPU when sampling.
>>
>> Ok. I believe that's different from all other uncore PMUs we support
>> (none of the drivers support sampling, certainly), so I'm not entirely
>> sure how/if we can make use of that sanely and reliably.
> 
> Right; because not only do you need to know which CPU originated the
> event, the IRQ must also happen on that CPU. Simply knowing which CPU
> triggered it is not enough for sampling.
> 
>> For the time being, I think this sampling needs to go, and the event_init
>> logic needs to reject sampling as with other uncore PMU drivers.
> 
> Agreed.

I want to make sure I understand what the concern is here.
Given the hardware filter which restricts counting to events generated by
a specific CPU, and an irq which is affine to that CPU, sampling and task mode
would seem to work for a single perf use. 
Is the issue only related to multiple concurrent perf uses?

> 
>> One thing I forgot to mention in my earlier comments is that as an
>> uncore PMU you need to have task_ctx_nr = perf_invalid_context here
>> also.
> 
> For good reasons, uncore PMUs (as is the case here) count strictly more
> than the effect of single CPUs (and thus also the current task). So
> attributing it back to a task is nonsense.
> 

Thanks,
Neil

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.



Thread overview: 13+ messages
-- links below jump to the message on this page --
2016-06-03 21:03 [PATCH 0/2] qcom: add l2 cache perf events driver Neil Leeder
2016-06-03 21:03 ` [PATCH 1/2] perf: allow add to change event state Neil Leeder
2016-06-03 21:38   ` Peter Zijlstra
2016-06-03 21:03 ` [PATCH 2/2] soc: qcom: add l2 cache perf events driver Neil Leeder
2016-06-06  9:51   ` Mark Rutland
2016-06-08 15:16     ` Neil Leeder
2016-06-09 15:56       ` Mark Rutland
2016-06-09 19:41         ` Peter Zijlstra
2016-06-10 22:34           ` Neil Leeder
2016-06-06  9:04 ` [PATCH 0/2] " Mark Rutland
2016-06-08 15:21   ` Neil Leeder
2016-06-08 16:12     ` Mark Rutland
2016-06-08 19:29       ` Neil Leeder
