linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v9 0/3] arm64 userspace counter support
@ 2021-08-06 22:51 Rob Herring
  2021-08-06 22:51 ` [PATCH v9 1/3] arm64: perf: Add userspace counter access disable switch Rob Herring
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Rob Herring @ 2021-08-06 22:51 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland, Catalin Marinas, Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, linux-arm-kernel, Arnaldo Carvalho de Melo,
	Jiri Olsa, Kan Liang, Ian Rogers, Alexander Shishkin,
	honnappa.nagarahalli, Zachary.Leaf, Raphael Gault,
	Jonathan Cameron, Namhyung Kim, Itaru Kitayama, linux-perf-users

Will, Mark,

This depends on:

https://lore.kernel.org/lkml/20210728230230.1911468-1-robh@kernel.org/

Go review that first, thanks!

Here's another version of arm64 userspace counter access support. This is
just the Arm bits, as Arnaldo asked for the tools bits to be sent separately.

The Arm implementation departs from the x86 implementation by requiring
the user to always explicitly request user access (via attr.config1) and
by only enabling access for task-bound events. Rather than trying to lock
down the access as the x86 implementation has been doing, we can start
with only a limited use case enabled and later expand it if needed.
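
As a rough illustration, opening such an event from a self-monitoring task
looks something like the sketch below (the config1 encoding is described in
patch 2; error handling and the subsequent mmap of the event are omitted):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Sketch only: a task-bound event (pid == 0, cpu == -1) which explicitly
 * requests userspace counter access via config1 bit 1.
 */
static int open_self_monitoring_event(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;		/* or a specific PMU's type */
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.config1 = 0x2;			/* user access, 32-bit counter */
	attr.exclude_kernel = 1;

	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}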

This originally resurrected Raphael's series[1] to enable userspace counter
access on arm64. My previous versions are here[2][3][4][5][6][7][8][9].

Changes in v9:
 - Reworked x86 and perf core to handle user access tracking and to call
   .event_mapped() and .event_unmapped() on the CPU the event is on, as is
   done for other changes to events.
 - Use sysctl instead of sysfs to disable user access.

Changes in v8:
 - Restrict user access to thread-bound events, which simplifies the
   implementation. A couple of perf core changes (patches 1 and 2) are
   needed to do this.
 - Always require the user to request userspace access.

Changes in v7:
 - Handling of dirty counter leakage and reworking of context switch and
   user access enabling. The .sched_task hook and undef instruction handler
   are now utilized. (Patch 3)
 - Add a userspace disable switch like x86. (Patch 5)

Changes in v6:
 - Reworking of the handling of 64-bit counters and user access. There's
   a new config1 flag to request user access. This takes priority over
   the 64-bit flag and the user will get the maximum size the h/w
   supports without chaining.
 - The libperf evsel mmap struct is stored in its own xyarray
 - New tests for user 64-bit and 32-bit counters
 - Rebase to v5.12-rc2

Changes in v5:
 - Limit enabling/disabling access to CPUs associated with the PMU
   (supported_cpus) and with the mm_struct matching current->active_mm.
   The x86 method of using mm_cpumask doesn't work for arm64 as it is not
   updated.
 - Only set cap_user_rdpmc if event is on current cpu. See patch 2.
 - Create an mmap for every event in an evsel. This results in some changes
   to the libperf mmap API from the last version.
 - Rebase to v5.11-rc2

Changes in v4:
 - Dropped 'arm64: pmu: Add hook to handle pmu-related undefined instructions'.
   The onus is on userspace to pin itself to a homogeneous subset of CPUs
   and avoid any aborts on heterogeneous systems, so the hook is not needed.
 - Make perf_evsel__mmap() take pages rather than bytes for size
 - Fix building arm64 heterogeneous test.

Changes in v3:
 - Dropped removing x86 rdpmc test until libperf tests can run via 'perf test'
 - Added verbose prints for tests
 - Split adding perf_evsel__mmap() to separate patch

The following changes to the arm64 support have been made compared to
Raphael's last version:

The major change is support for heterogeneous systems with some
restrictions. Specifically, userspace must pin itself to like CPUs, open
a specific PMU by type, and use h/w specific events. The tests have been
reworked to demonstrate this.
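
As a rough sketch, that amounts to something like the following (the PMU
name under sysfs is platform-specific and only an example here):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/*
 * Sketch: pin to one CPU of a homogeneous cluster and look up that
 * cluster's PMU type for use as attr.type. Real systems name their PMUs
 * differently and list the CPUs a PMU covers in its 'cpus' file.
 */
static int pin_and_get_pmu_type(unsigned int cpu, unsigned int *type)
{
	cpu_set_t mask;
	FILE *f;

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask))
		return -1;

	f = fopen("/sys/bus/event_source/devices/armv8_pmuv3_0/type", "r");
	if (!f)
		return -1;
	if (fscanf(f, "%u", type) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}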

Chained events are not supported. The problem with supporting chained
events was that there's no way to distinguish between a chained event and a
native 64-bit counter. We could add some flag, but do self-monitoring
processes really need that? Native 64-bit counters are supported if the
PMU h/w has support. As there's already an explicit ABI to request 64-bit
counters, userspace can request 64-bit counters, and if user access is not
enabled, it must retry with 32-bit counters.
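
In practice, a self-monitoring task can request both bits and then look at
'pmc_width' in the mmapped user page to see what it actually got. A minimal
sketch, assuming 'fd' is an event opened with attr.config1 = 0x3:

#include <linux/perf_event.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: discover the effective counter width from the user page. */
static int check_counter_width(int fd)
{
	struct perf_event_mmap_page *pc;

	pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (pc == MAP_FAILED)
		return -1;

	if (!pc->cap_user_rdpmc)
		return -1;	/* user access was not granted for this event */

	/* 64 only when the PMU has native 64-bit counters (no chaining) */
	printf("pmc_width: %u bits\n", (unsigned int)pc->pmc_width);
	return 0;
}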

Prior versions broke the build on arm32 (surprisingly never caught by
0-day). As a result, event_mapped and event_unmapped implementations have
been moved into the arm64 code.

There was a bug in that pmc_width was not set in the user page. The tests
now check for this.

The documentation has been converted to rST. I've added sections on
chained events and heterogeneous systems.

Rob

[1] https://lore.kernel.org/r/20190822144220.27860-1-raphael.gault@arm.com/
[2] https://lore.kernel.org/r/20200707205333.624938-1-robh@kernel.org/
[3] https://lore.kernel.org/r/20200828205614.3391252-1-robh@kernel.org/
[4] https://lore.kernel.org/r/20200911215118.2887710-1-robh@kernel.org/
[5] https://lore.kernel.org/r/20201001140116.651970-1-robh@kernel.org/
[6] https://lore.kernel.org/r/20210114020605.3943992-1-robh@kernel.org/
[7] https://lore.kernel.org/r/20210311000837.3630499-1-robh@kernel.org/
[8] https://lore.kernel.org/r/20210420031511.2348977-1-robh@kernel.org/
[9] https://lore.kernel.org/r/20210517195405.3079458-1-robh@kernel.org/

Raphael Gault (1):
  Documentation: arm64: Document PMU counters access from userspace

Rob Herring (2):
  arm64: perf: Add userspace counter access disable switch
  arm64: perf: Enable PMU counter userspace access for perf event

 Documentation/arm64/perf.rst   |  68 ++++++++++++++-
 arch/arm64/kernel/perf_event.c | 154 +++++++++++++++++++++++++++++++--
 include/linux/perf/arm_pmu.h   |   6 ++
 3 files changed, 219 insertions(+), 9 deletions(-)

-- 
2.30.2



* [PATCH v9 1/3] arm64: perf: Add userspace counter access disable switch
  2021-08-06 22:51 [PATCH v9 0/3] arm64 userspace counter support Rob Herring
@ 2021-08-06 22:51 ` Rob Herring
  2021-08-24 15:26   ` Will Deacon
  2021-08-06 22:51 ` [PATCH v9 2/3] arm64: perf: Enable PMU counter userspace access for perf event Rob Herring
  2021-08-06 22:51 ` [PATCH v9 3/3] Documentation: arm64: Document PMU counters access from userspace Rob Herring
  2 siblings, 1 reply; 8+ messages in thread
From: Rob Herring @ 2021-08-06 22:51 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland, Catalin Marinas, Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, linux-arm-kernel, Arnaldo Carvalho de Melo,
	Jiri Olsa, Kan Liang, Ian Rogers, Alexander Shishkin,
	honnappa.nagarahalli, Zachary.Leaf, Raphael Gault,
	Jonathan Cameron, Namhyung Kim, Itaru Kitayama, linux-perf-users

Like x86, some users may want to disable userspace PMU counter access
altogether. Add a sysctl 'perf_user_access' file to control userspace
counter access. The default is '0', which means access is disabled.
Writing '1' enables access.

Note that x86 also supports writing '2' to globally enable user access.
As there's no existing userspace support to worry about, this shouldn't
be necessary for Arm. It could be added later if the need arises.
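
A trivial sketch of flipping the switch from userspace (needs root; the path
follows from the table being registered under 'kernel' below):

#include <stdio.h>

/* Sketch: enable the userspace counter access switch added by this patch. */
static int enable_perf_user_access(void)
{
	FILE *f = fopen("/proc/sys/kernel/perf_user_access", "w");

	if (!f)
		return -1;	/* not root, or the kernel lacks this switch */
	fputs("1\n", f);
	return fclose(f);
}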

Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
---
v9:
 - Use sysctl instead of sysfs attr
 - Default to disabled
v8:
 - New patch

---
 arch/arm64/kernel/perf_event.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index d07788dad388..74f77b68f5f0 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -286,6 +286,21 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
 PMU_FORMAT_ATTR(event, "config:0-15");
 PMU_FORMAT_ATTR(long, "config1:0");
 
+static int sysctl_perf_user_access __read_mostly;
+
+static struct ctl_table armv8_pmu_sysctl_table[] = {
+	{
+		.procname       = "perf_user_access",
+		.data		= &sysctl_perf_user_access,
+		.maxlen		= sizeof(unsigned int),
+		.mode           = 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{ }
+};
+
 static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
 {
 	return event->attr.config1 & 0x1;
@@ -1136,6 +1151,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_CAPS] = caps ?
 			caps : &armv8_pmuv3_caps_attr_group;
 
+	register_sysctl("kernel", armv8_pmu_sysctl_table);
+
 	return 0;
 }
 
-- 
2.30.2



* [PATCH v9 2/3] arm64: perf: Enable PMU counter userspace access for perf event
  2021-08-06 22:51 [PATCH v9 0/3] arm64 userspace counter support Rob Herring
  2021-08-06 22:51 ` [PATCH v9 1/3] arm64: perf: Add userspace counter access disable switch Rob Herring
@ 2021-08-06 22:51 ` Rob Herring
  2021-08-24 15:27   ` Will Deacon
  2021-08-06 22:51 ` [PATCH v9 3/3] Documentation: arm64: Document PMU counters access from userspace Rob Herring
  2 siblings, 1 reply; 8+ messages in thread
From: Rob Herring @ 2021-08-06 22:51 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland, Catalin Marinas, Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, linux-arm-kernel, Arnaldo Carvalho de Melo,
	Jiri Olsa, Kan Liang, Ian Rogers, Alexander Shishkin,
	honnappa.nagarahalli, Zachary.Leaf, Raphael Gault,
	Jonathan Cameron, Namhyung Kim, Itaru Kitayama, linux-perf-users

Arm PMUs can support direct userspace access to counters, which allows for
low-overhead (i.e. no syscall) self-monitoring of tasks. The same feature
exists on x86 and is known as 'rdpmc'. Unlike x86, userspace access will
only be enabled for thread-bound events. This could be extended if needed,
but restricting it simplifies the implementation and reduces the chances of
any information leaks (which the x86 implementation suffers from).

When an event is capable of userspace access and has been mmapped, userspace
access is enabled when the event is scheduled on a CPU's PMU. There's some
additional overhead from clearing unused counters when access is enabled, in
order to prevent leaking another task's counter data.

Unlike x86, enabling of userspace access must be requested with a new
attr bit: config1:1. If the user requests userspace access and 64-bit
counters, then chaining will be disabled and the user will get the
maximum size counter the underlying h/w can support. The modes for
config1 are as follows:

config1 = 0 : user access disabled and always 32-bit
config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
config1 = 2 : user access enabled and always 32-bit
config1 = 3 : user access enabled and counter size matches underlying counter.

Based on work by Raphael Gault <raphael.gault@arm.com>, but it has been
completely re-written.
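
For illustration, here is a sketch of how a self-monitoring thread might
read such a counter, following the userpage read pattern documented in
include/uapi/linux/perf_event.h. The PMSELR_EL0/PMXEVCNTR_EL0 indirection is
just one way to read an arbitrary event counter from EL0 and is an
assumption, not something this series mandates:

#include <linux/perf_event.h>
#include <stdint.h>

/*
 * Sketch: index 32 is the remapped cycle counter (see below); other
 * indices select an event counter, which PMUSERENR_EL0.ER permits
 * reading from EL0.
 */
static uint64_t read_counter(uint32_t index)
{
	uint64_t val;

	if (index == 32) {	/* ARMV8_IDX_CYCLE_COUNTER_USER */
		asm volatile("mrs %0, pmccntr_el0" : "=r" (val));
	} else {
		asm volatile("msr pmselr_el0, %1\n\t"
			     "isb\n\t"
			     "mrs %0, pmxevcntr_el0"
			     : "=r" (val) : "r" ((uint64_t)(index - 1)));
	}
	return val;
}

/* Sketch: lock-free read of an mmapped, user-accessible event. */
static uint64_t read_event_count(struct perf_event_mmap_page *pc)
{
	uint32_t seq, idx;
	uint64_t count;
	int64_t pmc;

	do {
		seq = pc->lock;
		asm volatile("" ::: "memory");	/* compiler barrier */
		idx = pc->index;
		count = pc->offset;
		if (pc->cap_user_rdpmc && idx) {
			pmc = read_counter(idx);
			/* drop any stale bits above pmc_width */
			pmc <<= 64 - pc->pmc_width;
			pmc >>= 64 - pc->pmc_width;
			count += pmc;
		}
		asm volatile("" ::: "memory");
	} while (pc->lock != seq);

	return count;
}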

Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>

---
v9:
 - Enabling/disabling of user access is now controlled in .start() and
   mmap hooks which are now called on CPUs that the event is on.
   Depends on rework of perf core and x86 RDPMC code posted here:
   https://lore.kernel.org/lkml/20210728230230.1911468-1-robh@kernel.org/

v8:
 - Rework user access tracking and enabling to be done on task
   context changes using sched_task() hook. This avoids the need for any
   IPIs, mm_switch hooks or undef instr handler.
 - Only support user access when explicitly requested on open and
   only for thread-bound events. This avoids some of the information
   leaks x86 has and simplifies the implementation.

v7:
 - Clear disabled counters when user access is enabled for a task to
   avoid leaking other tasks counter data.
 - Rework context switch handling utilizing sched_task callback
 - Add armv8pmu_event_can_chain() helper
 - Rework config1 flags handling structure
 - Use ARMV8_IDX_CYCLE_COUNTER_USER define for remapped user cycle
   counter index

v6:
 - Add new attr.config1 rdpmc bit for userspace to hint it wants
   userspace access when also requesting 64-bit counters.

v5:
 - Only set cap_user_rdpmc if event is on current cpu
 - Limit enabling/disabling access to CPUs associated with the PMU
   (supported_cpus) and with the mm_struct matching current->active_mm.

v2:
 - Move mapped/unmapped into arm64 code. Fixes arm32.
 - Rebase on cap_user_time_short changes

Changes from Raphael's v4:
  - Drop homogeneous check
  - Disable access for chained counters
  - Set pmc_width in user page
---
 arch/arm64/kernel/perf_event.c | 137 +++++++++++++++++++++++++++++++--
 include/linux/perf/arm_pmu.h   |   6 ++
 2 files changed, 135 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 74f77b68f5f0..66d8bf62e99c 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -285,6 +285,7 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
 
 PMU_FORMAT_ATTR(event, "config:0-15");
 PMU_FORMAT_ATTR(long, "config1:0");
+PMU_FORMAT_ATTR(rdpmc, "config1:1");
 
 static int sysctl_perf_user_access __read_mostly;
 
@@ -306,9 +307,15 @@ static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
 	return event->attr.config1 & 0x1;
 }
 
+static inline bool armv8pmu_event_want_user_access(struct perf_event *event)
+{
+	return event->attr.config1 & 0x2;
+}
+
 static struct attribute *armv8_pmuv3_format_attrs[] = {
 	&format_attr_event.attr,
 	&format_attr_long.attr,
+	&format_attr_rdpmc.attr,
 	NULL,
 };
 
@@ -377,7 +384,7 @@ static const struct attribute_group armv8_pmuv3_caps_attr_group = {
  */
 #define	ARMV8_IDX_CYCLE_COUNTER	0
 #define	ARMV8_IDX_COUNTER0	1
-
+#define	ARMV8_IDX_CYCLE_COUNTER_USER	32
 
 /*
  * We unconditionally enable ARMv8.5-PMU long event counter support
@@ -389,6 +396,15 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
 	return (cpu_pmu->pmuver >= ID_AA64DFR0_PMUVER_8_5);
 }
 
+static inline bool armv8pmu_event_can_chain(struct perf_event *event)
+{
+	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
+
+	return !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) &&
+	       armv8pmu_event_is_64bit(event) &&
+	       !armv8pmu_has_long_event(cpu_pmu);
+}
+
 /*
  * We must chain two programmable counters for 64 bit events,
  * except when we have allocated the 64bit cycle counter (for CPU
@@ -398,11 +414,9 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
 static inline bool armv8pmu_event_is_chained(struct perf_event *event)
 {
 	int idx = event->hw.idx;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 
 	return !WARN_ON(idx < 0) &&
-	       armv8pmu_event_is_64bit(event) &&
-	       !armv8pmu_has_long_event(cpu_pmu) &&
+	       armv8pmu_event_can_chain(event) &&
 	       (idx != ARMV8_IDX_CYCLE_COUNTER);
 }
 
@@ -733,6 +747,35 @@ static inline u32 armv8pmu_getreset_flags(void)
 	return value;
 }
 
+static void armv8pmu_disable_user_access(void)
+{
+	write_sysreg(0, pmuserenr_el0);
+}
+
+static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
+{
+	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
+
+	if (!sysctl_perf_user_access)
+		return;
+
+	if (!bitmap_empty(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS)) {
+		int i;
+		/* Don't need to clear assigned counters. */
+		bitmap_xor(cpuc->dirty_mask, cpuc->dirty_mask, cpuc->used_mask, ARMPMU_MAX_HWEVENTS);
+
+		for_each_set_bit(i, cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS) {
+			if (i == ARMV8_IDX_CYCLE_COUNTER)
+				write_sysreg(0, pmccntr_el0);
+			else
+				armv8pmu_write_evcntr(i, 0);
+		}
+		bitmap_zero(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS);
+	}
+
+	write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
+}
+
 static void armv8pmu_enable_event(struct perf_event *event)
 {
 	/*
@@ -776,6 +819,16 @@ static void armv8pmu_disable_event(struct perf_event *event)
 
 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 {
+	if (sysctl_perf_user_access) {
+		struct perf_cpu_context *cpuctx = this_cpu_ptr(cpu_pmu->pmu.pmu_cpu_context);
+		struct perf_event_context *task_ctx = cpuctx->task_ctx;
+		if (atomic_read(&cpuctx->ctx.nr_user) ||
+		    (task_ctx && atomic_read(&task_ctx->nr_user)))
+			armv8pmu_enable_user_access(cpu_pmu);
+		else
+			armv8pmu_disable_user_access();
+	}
+
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 }
@@ -893,13 +946,16 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
 		if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
 			return ARMV8_IDX_CYCLE_COUNTER;
+		else if (armv8pmu_event_is_64bit(event) &&
+			   armv8pmu_event_want_user_access(event) &&
+			   !armv8pmu_has_long_event(cpu_pmu))
+				return -EAGAIN;
 	}
 
 	/*
 	 * Otherwise use events counters
 	 */
-	if (armv8pmu_event_is_64bit(event) &&
-	    !armv8pmu_has_long_event(cpu_pmu))
+	if (armv8pmu_event_can_chain(event))
 		return	armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
@@ -911,8 +967,44 @@ static void armv8pmu_clear_event_idx(struct pmu_hw_events *cpuc,
 	int idx = event->hw.idx;
 
 	clear_bit(idx, cpuc->used_mask);
-	if (armv8pmu_event_is_chained(event))
+	set_bit(idx, cpuc->dirty_mask);
+	if (armv8pmu_event_is_chained(event)) {
 		clear_bit(idx - 1, cpuc->used_mask);
+		set_bit(idx - 1, cpuc->dirty_mask);
+	}
+}
+
+static int armv8pmu_access_event_idx(struct perf_event *event)
+{
+	if (!sysctl_perf_user_access ||
+	    !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
+		return 0;
+
+	/*
+	 * We remap the cycle counter index to 32 to
+	 * match the offset applied to the rest of
+	 * the counter indices.
+	 */
+	if (event->hw.idx == ARMV8_IDX_CYCLE_COUNTER)
+		return ARMV8_IDX_CYCLE_COUNTER_USER;
+
+	return event->hw.idx;
+}
+
+static void armv8pmu_event_mapped(struct perf_event *event)
+{
+	if (atomic_read(&event->ctx->nr_user) != 1)
+		return;
+
+	armv8pmu_enable_user_access(to_arm_pmu(event->pmu));
+}
+
+static void armv8pmu_event_unmapped(struct perf_event *event)
+{
+	if (atomic_read(&event->ctx->nr_user) != 1)
+		return;
+
+	armv8pmu_disable_user_access();
 }
 
 /*
@@ -1008,9 +1100,22 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
 				       &armv8_pmuv3_perf_cache_map,
 				       ARMV8_PMU_EVTYPE_EVENT);
 
-	if (armv8pmu_event_is_64bit(event))
+	/*
+	 * At this point, the counter is not assigned. If a 64-bit counter is
+	 * requested, we must make sure the h/w has 64-bit counters if we set
+	 * the event size to 64-bit because chaining is not supported with
+	 * userspace access. This may still fail later on if the CPU cycle
+	 * counter is in use.
+	 */
+	if (armv8pmu_event_is_64bit(event) &&
+	    (!armv8pmu_event_want_user_access(event) ||
+	     armv8pmu_has_long_event(armpmu) || (hw_event_id == ARMV8_PMUV3_PERFCTR_CPU_CYCLES)))
 		event->hw.flags |= ARMPMU_EVT_64BIT;
 
+	/* Userspace counter access only enabled if requested and a per task event */
+	if (sysctl_perf_user_access && armv8pmu_event_want_user_access(event) && event->hw.target)
+		event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;
+
 	/* Only expose micro/arch events supported by this PMU */
 	if ((hw_event_id > 0) && (hw_event_id < ARMV8_PMUV3_MAX_COMMON_EVENTS)
 	    && test_bit(hw_event_id, armpmu->pmceid_bitmap)) {
@@ -1142,6 +1247,10 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
 	cpu_pmu->filter_match		= armv8pmu_filter_match;
 
+	cpu_pmu->pmu.event_idx		= armv8pmu_access_event_idx;
+	cpu_pmu->pmu.event_mapped	= armv8pmu_event_mapped;
+	cpu_pmu->pmu.event_unmapped	= armv8pmu_event_unmapped;
+
 	cpu_pmu->name			= name;
 	cpu_pmu->map_event		= map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] = events ?
@@ -1318,6 +1427,18 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->cap_user_time = 0;
 	userpg->cap_user_time_zero = 0;
 	userpg->cap_user_time_short = 0;
+	userpg->cap_user_rdpmc = !!userpg->index;
+
+	if (userpg->cap_user_rdpmc) {
+		struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
+
+		if (armv8pmu_event_is_64bit(event) &&
+		    (armv8pmu_has_long_event(cpu_pmu) ||
+		     (userpg->index == ARMV8_IDX_CYCLE_COUNTER_USER)))
+			userpg->pmc_width = 64;
+		else
+			userpg->pmc_width = 32;
+	}
 
 	do {
 		rd = sched_clock_read_begin(&seq);
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 505480217cf1..46f09c8d7e9a 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -54,6 +54,12 @@ struct pmu_hw_events {
 	 */
 	DECLARE_BITMAP(used_mask, ARMPMU_MAX_HWEVENTS);
 
+	/*
+	 * A 1 bit for an index indicates that the counter has been used for
+	 * an event and has not been cleared.
+	 */
+	DECLARE_BITMAP(dirty_mask, ARMPMU_MAX_HWEVENTS);
+
 	/*
 	 * Hardware lock to serialize accesses to PMU registers. Needed for the
 	 * read/modify/write sequences.
-- 
2.30.2



* [PATCH v9 3/3] Documentation: arm64: Document PMU counters access from userspace
  2021-08-06 22:51 [PATCH v9 0/3] arm64 userspace counter support Rob Herring
  2021-08-06 22:51 ` [PATCH v9 1/3] arm64: perf: Add userspace counter access disable switch Rob Herring
  2021-08-06 22:51 ` [PATCH v9 2/3] arm64: perf: Enable PMU counter userspace access for perf event Rob Herring
@ 2021-08-06 22:51 ` Rob Herring
  2 siblings, 0 replies; 8+ messages in thread
From: Rob Herring @ 2021-08-06 22:51 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland, Catalin Marinas, Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, linux-arm-kernel, Arnaldo Carvalho de Melo,
	Jiri Olsa, Kan Liang, Ian Rogers, Alexander Shishkin,
	honnappa.nagarahalli, Zachary.Leaf, Raphael Gault,
	Jonathan Cameron, Namhyung Kim, Itaru Kitayama, linux-perf-users

From: Raphael Gault <raphael.gault@arm.com>

Add documentation describing access to the PMU hardware counters from
userspace.

Signed-off-by: Raphael Gault <raphael.gault@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
v9:
 - No change
v8:
 - Reword that config1:1 must always be set to request user access
v7:
 - Merge into existing arm64 perf.rst
v6:
  - Update the chained event section with attr.config1 details
v2:
  - Update links to test examples

Changes from Raphael's v4:
  - Convert to rSt
  - Update chained event status
  - Add section for heterogeneous systems
---
 Documentation/arm64/perf.rst | 68 +++++++++++++++++++++++++++++++++++-
 1 file changed, 67 insertions(+), 1 deletion(-)

diff --git a/Documentation/arm64/perf.rst b/Documentation/arm64/perf.rst
index b567f177d385..fa8706df2281 100644
--- a/Documentation/arm64/perf.rst
+++ b/Documentation/arm64/perf.rst
@@ -2,7 +2,10 @@
 
 .. _perf_index:
 
-=====================
+====
+Perf
+====
+
 Perf Event Attributes
 =====================
 
@@ -88,3 +91,66 @@ exclude_host. However when using !exclude_hv there is a small blackout
 window at the guest entry/exit where host events are not captured.
 
 On VHE systems there are no blackout windows.
+
+Perf Userspace PMU Hardware Counter Access
+==========================================
+
+Overview
+--------
+The perf userspace tool relies on the PMU to monitor events. It offers an
+abstraction layer over the hardware counters since the underlying
+implementation is cpu-dependent.
+Arm64 allows userspace tools to have access to the registers storing the
+hardware counters' values directly.
+
+This targets specifically self-monitoring tasks in order to reduce the overhead
+by directly accessing the registers without having to go through the kernel.
+
+How-to
+------
+The focus is set on the armv8 PMUv3 which makes sure that the access to the pmu
+registers is enabled and that the userspace has access to the relevant
+information in order to use them.
+
+In order to have access to the hardware counter it is necessary to open the
+event using the perf tool interface with config1:1 attr bit set: the
+sys_perf_event_open syscall returns a fd which can subsequently be used
+with the mmap syscall in order to retrieve a page of memory containing
+information about the event. The PMU driver uses this page to expose to
+the user the hardware counter's index and other necessary data. Using
+this index enables the user to access the PMU registers using the `mrs`
+instruction.
+
+The userspace access is supported in libperf using the perf_evsel__mmap()
+and perf_evsel__read() functions. See `tools/lib/perf/tests/test-evsel.c`_ for
+an example.
+
+About heterogeneous systems
+---------------------------
+On heterogeneous systems such as big.LITTLE, userspace PMU counter access can
+only be enabled when the tasks are pinned to a homogeneous subset of cores and
+the corresponding PMU instance is opened by specifying the 'type' attribute.
+The use of generic event types is not supported in this case.
+
+Have a look at `tools/perf/arch/arm64/tests/user-events.c`_ for an example. It
+can be run using the perf tool to check that the access to the registers works
+correctly from userspace:
+
+.. code-block:: sh
+
+  perf test -v user
+
+About chained events and 64-bit counters
+----------------------------------------
+Chained events are not supported in conjunction with userspace counter
+access. If a 64-bit counter is requested (attr.config1:0) with userspace
+access (attr.config1:1 set), then counter chaining will be disabled. The
+'pmc_width' in the user page will indicate the actual width of the
+counter which could be only 32-bits depending on the event and PMU
+features.
+
+.. Links
+.. _tools/perf/arch/arm64/tests/user-events.c:
+   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/perf/arch/arm64/tests/user-events.c
+.. _tools/lib/perf/tests/test-evsel.c:
+   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/lib/perf/tests/test-evsel.c
-- 
2.30.2



* Re: [PATCH v9 1/3] arm64: perf: Add userspace counter access disable switch
  2021-08-06 22:51 ` [PATCH v9 1/3] arm64: perf: Add userspace counter access disable switch Rob Herring
@ 2021-08-24 15:26   ` Will Deacon
  0 siblings, 0 replies; 8+ messages in thread
From: Will Deacon @ 2021-08-24 15:26 UTC (permalink / raw)
  To: Rob Herring
  Cc: Mark Rutland, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
	linux-kernel, linux-arm-kernel, Arnaldo Carvalho de Melo,
	Jiri Olsa, Kan Liang, Ian Rogers, Alexander Shishkin,
	honnappa.nagarahalli, Zachary.Leaf, Raphael Gault,
	Jonathan Cameron, Namhyung Kim, Itaru Kitayama, linux-perf-users

On Fri, Aug 06, 2021 at 04:51:21PM -0600, Rob Herring wrote:
> Like x86, some users may want to disable userspace PMU counter
> altogether. Add a sysctl 'perf_user_access' file to control userspace
> counter access. The default is '0' which is disabled. Writing '1'
> enables access.
> 
> Note that x86 also supports writing '2' to globally enable user access.
> As there's not existing userspace support to worry about, this shouldn't
> be necessary for Arm. It could be added later if the need arises.
> 
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: linux-perf-users@vger.kernel.org
> Signed-off-by: Rob Herring <robh@kernel.org>
> ---
> v9:
>  - Use sysctl instead of sysfs attr
>  - Default to disabled
> v8:
>  - New patch
> 
> ---
>  arch/arm64/kernel/perf_event.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index d07788dad388..74f77b68f5f0 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -286,6 +286,21 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
>  PMU_FORMAT_ATTR(event, "config:0-15");
>  PMU_FORMAT_ATTR(long, "config1:0");
>  
> +static int sysctl_perf_user_access __read_mostly;
> +
> +static struct ctl_table armv8_pmu_sysctl_table[] = {
> +	{
> +		.procname       = "perf_user_access",
> +		.data		= &sysctl_perf_user_access,
> +		.maxlen		= sizeof(unsigned int),
> +		.mode           = 0644,
> +		.proc_handler	= proc_dointvec_minmax,
> +		.extra1		= SYSCTL_ZERO,
> +		.extra2		= SYSCTL_ONE,
> +	},
> +	{ }
> +};

This should be documented somewhere. Maybe add an entry to
Documentation/admin-guide/sysctl/kernel.rst which points at the doc in patch
3, which needs updating to talk about this control?

Otherwise, looks good:

Acked-by: Will Deacon <will@kernel.org>

Will


* Re: [PATCH v9 2/3] arm64: perf: Enable PMU counter userspace access for perf event
  2021-08-06 22:51 ` [PATCH v9 2/3] arm64: perf: Enable PMU counter userspace access for perf event Rob Herring
@ 2021-08-24 15:27   ` Will Deacon
  2021-08-24 21:58     ` Rob Herring
  0 siblings, 1 reply; 8+ messages in thread
From: Will Deacon @ 2021-08-24 15:27 UTC (permalink / raw)
  To: Rob Herring
  Cc: Mark Rutland, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
	linux-kernel, linux-arm-kernel, Arnaldo Carvalho de Melo,
	Jiri Olsa, Kan Liang, Ian Rogers, Alexander Shishkin,
	honnappa.nagarahalli, Zachary.Leaf, Raphael Gault,
	Jonathan Cameron, Namhyung Kim, Itaru Kitayama, linux-perf-users

On Fri, Aug 06, 2021 at 04:51:22PM -0600, Rob Herring wrote:
> Arm PMUs can support direct userspace access of counters which allows for
> low overhead (i.e. no syscall) self-monitoring of tasks. The same feature
> exists on x86 called 'rdpmc'. Unlike x86, userspace access will only be
> enabled for thread bound events. This could be extended if needed, but
> simplifies the implementation and reduces the chances for any
> information leaks (which the x86 implementation suffers from).
> 
> When an event is capable of userspace access and has been mmapped, userspace
> access is enabled when the event is scheduled on a CPU's PMU. There's some
> additional overhead clearing counters when disabled in order to prevent
> leaking disabled counter data from other tasks.
> 
> Unlike x86, enabling of userspace access must be requested with a new
> attr bit: config1:1. If the user requests userspace access and 64-bit
> counters, then chaining will be disabled and the user will get the
> maximum size counter the underlying h/w can support. The modes for
> config1 are as follows:
> 
> config1 = 0 : user access disabled and always 32-bit
> config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
> config1 = 2 : user access enabled and always 32-bit
> config1 = 3 : user access enabled and counter size matches underlying counter.
> 
> Based on work by Raphael Gault <raphael.gault@arm.com>, but has been
> completely re-written.
> 
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-perf-users@vger.kernel.org
> Signed-off-by: Rob Herring <robh@kernel.org>
> 
> ---
> v9:
>  - Enabling/disabling of user access is now controlled in .start() and
>    mmap hooks which are now called on CPUs that the event is on.
>    Depends on rework of perf core and x86 RDPMC code posted here:
>    https://lore.kernel.org/lkml/20210728230230.1911468-1-robh@kernel.org/
> 
> v8:
>  - Rework user access tracking and enabling to be done on task
>    context changes using sched_task() hook. This avoids the need for any
>    IPIs, mm_switch hooks or undef instr handler.
>  - Only support user access when explicitly requested on open and
>    only for a thread bound events. This avoids some of the information
>    leaks x86 has and simplifies the implementation.
> 
> v7:
>  - Clear disabled counters when user access is enabled for a task to
>    avoid leaking other tasks counter data.
>  - Rework context switch handling utilizing sched_task callback
>  - Add armv8pmu_event_can_chain() helper
>  - Rework config1 flags handling structure
>  - Use ARMV8_IDX_CYCLE_COUNTER_USER define for remapped user cycle
>    counter index
> 
> v6:
>  - Add new attr.config1 rdpmc bit for userspace to hint it wants
>    userspace access when also requesting 64-bit counters.
> 
> v5:
>  - Only set cap_user_rdpmc if event is on current cpu
>  - Limit enabling/disabling access to CPUs associated with the PMU
>    (supported_cpus) and with the mm_struct matching current->active_mm.
> 
> v2:
>  - Move mapped/unmapped into arm64 code. Fixes arm32.
>  - Rebase on cap_user_time_short changes
> 
> Changes from Raphael's v4:
>   - Drop homogeneous check
>   - Disable access for chained counters
>   - Set pmc_width in user page
> ---
>  arch/arm64/kernel/perf_event.c | 137 +++++++++++++++++++++++++++++++--
>  include/linux/perf/arm_pmu.h   |   6 ++
>  2 files changed, 135 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 74f77b68f5f0..66d8bf62e99c 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -285,6 +285,7 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
>  
>  PMU_FORMAT_ATTR(event, "config:0-15");
>  PMU_FORMAT_ATTR(long, "config1:0");
> +PMU_FORMAT_ATTR(rdpmc, "config1:1");
>  
>  static int sysctl_perf_user_access __read_mostly;
>  
> @@ -306,9 +307,15 @@ static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
>  	return event->attr.config1 & 0x1;
>  }
>  
> +static inline bool armv8pmu_event_want_user_access(struct perf_event *event)
> +{
> +	return event->attr.config1 & 0x2;
> +}
> +
>  static struct attribute *armv8_pmuv3_format_attrs[] = {
>  	&format_attr_event.attr,
>  	&format_attr_long.attr,
> +	&format_attr_rdpmc.attr,
>  	NULL,
>  };
>  
> @@ -377,7 +384,7 @@ static const struct attribute_group armv8_pmuv3_caps_attr_group = {
>   */
>  #define	ARMV8_IDX_CYCLE_COUNTER	0
>  #define	ARMV8_IDX_COUNTER0	1
> -
> +#define	ARMV8_IDX_CYCLE_COUNTER_USER	32
>  
>  /*
>   * We unconditionally enable ARMv8.5-PMU long event counter support
> @@ -389,6 +396,15 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
>  	return (cpu_pmu->pmuver >= ID_AA64DFR0_PMUVER_8_5);
>  }
>  
> +static inline bool armv8pmu_event_can_chain(struct perf_event *event)
> +{
> +	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +
> +	return !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) &&
> +	       armv8pmu_event_is_64bit(event) &&
> +	       !armv8pmu_has_long_event(cpu_pmu);

Could check against ARMV8_IDX_CYCLE_COUNTER here...

> +}
> +
>  /*
>   * We must chain two programmable counters for 64 bit events,
>   * except when we have allocated the 64bit cycle counter (for CPU
> @@ -398,11 +414,9 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
>  static inline bool armv8pmu_event_is_chained(struct perf_event *event)
>  {
>  	int idx = event->hw.idx;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
>  
>  	return !WARN_ON(idx < 0) &&
> -	       armv8pmu_event_is_64bit(event) &&
> -	       !armv8pmu_has_long_event(cpu_pmu) &&
> +	       armv8pmu_event_can_chain(event) &&
>  	       (idx != ARMV8_IDX_CYCLE_COUNTER);

... then we wouldn't need to here.

>  }
>  
> @@ -733,6 +747,35 @@ static inline u32 armv8pmu_getreset_flags(void)
>  	return value;
>  }
>  
> +static void armv8pmu_disable_user_access(void)
> +{
> +	write_sysreg(0, pmuserenr_el0);
> +}
> +
> +static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> +{
> +	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> +
> +	if (!sysctl_perf_user_access)
> +		return;
> +
> +	if (!bitmap_empty(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS)) {
> +		int i;
> +		/* Don't need to clear assigned counters. */
> +		bitmap_xor(cpuc->dirty_mask, cpuc->dirty_mask, cpuc->used_mask, ARMPMU_MAX_HWEVENTS);
> +
> +		for_each_set_bit(i, cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS) {
> +			if (i == ARMV8_IDX_CYCLE_COUNTER)
> +				write_sysreg(0, pmccntr_el0);
> +			else
> +				armv8pmu_write_evcntr(i, 0);
> +		}

Given that we can't expose individual counters, why isn't this just:

	for_each_clear_bit(i, cpuc->used_mask, ARMPMU_MAX_HWEVENTS)
		...

and we could get rid of the dirty_mask altogether? i.e. just zero everything
that isn't assigned.

> +		bitmap_zero(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS);
> +	}
> +
> +	write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
> +}
> +
>  static void armv8pmu_enable_event(struct perf_event *event)
>  {
>  	/*
> @@ -776,6 +819,16 @@ static void armv8pmu_disable_event(struct perf_event *event)
>  
>  static void armv8pmu_start(struct arm_pmu *cpu_pmu)
>  {
> +	if (sysctl_perf_user_access) {

armv8pmu_enable_user_access() already checks this.

> +		struct perf_cpu_context *cpuctx = this_cpu_ptr(cpu_pmu->pmu.pmu_cpu_context);
> +		struct perf_event_context *task_ctx = cpuctx->task_ctx;
> +		if (atomic_read(&cpuctx->ctx.nr_user) ||

I thought we only enabled this for per-task events, so not sure why we need
this check. But actually, I don't get why we need any extra logic in this
function at all; why aren't the ->mapped/->unmapped functions sufficient?

Will


* Re: [PATCH v9 2/3] arm64: perf: Enable PMU counter userspace access for perf event
  2021-08-24 15:27   ` Will Deacon
@ 2021-08-24 21:58     ` Rob Herring
  2021-08-25 19:59       ` Rob Herring
  0 siblings, 1 reply; 8+ messages in thread
From: Rob Herring @ 2021-08-24 21:58 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
	linux-kernel, linux-arm-kernel, Arnaldo Carvalho de Melo,
	Jiri Olsa, Kan Liang, Ian Rogers, Alexander Shishkin,
	Honnappa Nagarahalli, Zachary.Leaf, Raphael Gault,
	Jonathan Cameron, Namhyung Kim, Itaru Kitayama, linux-perf-users

On Tue, Aug 24, 2021 at 10:27 AM Will Deacon <will@kernel.org> wrote:
>
> On Fri, Aug 06, 2021 at 04:51:22PM -0600, Rob Herring wrote:
> > Arm PMUs can support direct userspace access of counters which allows for
> > low overhead (i.e. no syscall) self-monitoring of tasks. The same feature
> > exists on x86 called 'rdpmc'. Unlike x86, userspace access will only be
> > enabled for thread bound events. This could be extended if needed, but
> > simplifies the implementation and reduces the chances for any
> > information leaks (which the x86 implementation suffers from).
> >
> > When an event is capable of userspace access and has been mmapped, userspace
> > access is enabled when the event is scheduled on a CPU's PMU. There's some
> > additional overhead clearing counters when disabled in order to prevent
> > leaking disabled counter data from other tasks.
> >
> > Unlike x86, enabling of userspace access must be requested with a new
> > attr bit: config1:1. If the user requests userspace access and 64-bit
> > counters, then chaining will be disabled and the user will get the
> > maximum size counter the underlying h/w can support. The modes for
> > config1 are as follows:
> >
> > config1 = 0 : user access disabled and always 32-bit
> > config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
> > config1 = 2 : user access enabled and always 32-bit
> > config1 = 3 : user access enabled and counter size matches underlying counter.
> >
> > Based on work by Raphael Gault <raphael.gault@arm.com>, but has been
> > completely re-written.
> >
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> > Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> > Cc: Jiri Olsa <jolsa@redhat.com>
> > Cc: Namhyung Kim <namhyung@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: linux-arm-kernel@lists.infradead.org
> > Cc: linux-perf-users@vger.kernel.org
> > Signed-off-by: Rob Herring <robh@kernel.org>
> >
> > ---
> > v9:
> >  - Enabling/disabling of user access is now controlled in .start() and
> >    mmap hooks which are now called on CPUs that the event is on.
> >    Depends on rework of perf core and x86 RDPMC code posted here:
> >    https://lore.kernel.org/lkml/20210728230230.1911468-1-robh@kernel.org/
> >
> > v8:
> >  - Rework user access tracking and enabling to be done on task
> >    context changes using sched_task() hook. This avoids the need for any
> >    IPIs, mm_switch hooks or undef instr handler.
> >  - Only support user access when explicitly requested on open and
> >    only for a thread bound events. This avoids some of the information
> >    leaks x86 has and simplifies the implementation.
> >
> > v7:
> >  - Clear disabled counters when user access is enabled for a task to
> >    avoid leaking other tasks counter data.
> >  - Rework context switch handling utilizing sched_task callback
> >  - Add armv8pmu_event_can_chain() helper
> >  - Rework config1 flags handling structure
> >  - Use ARMV8_IDX_CYCLE_COUNTER_USER define for remapped user cycle
> >    counter index
> >
> > v6:
> >  - Add new attr.config1 rdpmc bit for userspace to hint it wants
> >    userspace access when also requesting 64-bit counters.
> >
> > v5:
> >  - Only set cap_user_rdpmc if event is on current cpu
> >  - Limit enabling/disabling access to CPUs associated with the PMU
> >    (supported_cpus) and with the mm_struct matching current->active_mm.
> >
> > v2:
> >  - Move mapped/unmapped into arm64 code. Fixes arm32.
> >  - Rebase on cap_user_time_short changes
> >
> > Changes from Raphael's v4:
> >   - Drop homogeneous check
> >   - Disable access for chained counters
> >   - Set pmc_width in user page
> > ---
> >  arch/arm64/kernel/perf_event.c | 137 +++++++++++++++++++++++++++++++--
> >  include/linux/perf/arm_pmu.h   |   6 ++
> >  2 files changed, 135 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> > index 74f77b68f5f0..66d8bf62e99c 100644
> > --- a/arch/arm64/kernel/perf_event.c
> > +++ b/arch/arm64/kernel/perf_event.c
> > @@ -285,6 +285,7 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
> >
> >  PMU_FORMAT_ATTR(event, "config:0-15");
> >  PMU_FORMAT_ATTR(long, "config1:0");
> > +PMU_FORMAT_ATTR(rdpmc, "config1:1");
> >
> >  static int sysctl_perf_user_access __read_mostly;
> >
> > @@ -306,9 +307,15 @@ static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
> >       return event->attr.config1 & 0x1;
> >  }
> >
> > +static inline bool armv8pmu_event_want_user_access(struct perf_event *event)
> > +{
> > +     return event->attr.config1 & 0x2;
> > +}
> > +
> >  static struct attribute *armv8_pmuv3_format_attrs[] = {
> >       &format_attr_event.attr,
> >       &format_attr_long.attr,
> > +     &format_attr_rdpmc.attr,
> >       NULL,
> >  };
> >
> > @@ -377,7 +384,7 @@ static const struct attribute_group armv8_pmuv3_caps_attr_group = {
> >   */
> >  #define      ARMV8_IDX_CYCLE_COUNTER 0
> >  #define      ARMV8_IDX_COUNTER0      1
> > -
> > +#define      ARMV8_IDX_CYCLE_COUNTER_USER    32
> >
> >  /*
> >   * We unconditionally enable ARMv8.5-PMU long event counter support
> > @@ -389,6 +396,15 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
> >       return (cpu_pmu->pmuver >= ID_AA64DFR0_PMUVER_8_5);
> >  }
> >
> > +static inline bool armv8pmu_event_can_chain(struct perf_event *event)
> > +{
> > +     struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> > +
> > +     return !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) &&
> > +            armv8pmu_event_is_64bit(event) &&
> > +            !armv8pmu_has_long_event(cpu_pmu);
>
> Could check against ARMV8_IDX_CYCLE_COUNTER here...
>
> > +}
> > +
> >  /*
> >   * We must chain two programmable counters for 64 bit events,
> >   * except when we have allocated the 64bit cycle counter (for CPU
> > @@ -398,11 +414,9 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
> >  static inline bool armv8pmu_event_is_chained(struct perf_event *event)
> >  {
> >       int idx = event->hw.idx;
> > -     struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> >
> >       return !WARN_ON(idx < 0) &&
> > -            armv8pmu_event_is_64bit(event) &&
> > -            !armv8pmu_has_long_event(cpu_pmu) &&
> > +            armv8pmu_event_can_chain(event) &&
> >              (idx != ARMV8_IDX_CYCLE_COUNTER);
>
> ... then we wouldn't need to here.

Hmm, well, armv8pmu_event_can_chain() is supposed to answer whether there
is any possibility that the event will ever be chained, regardless of
whether it's assigned or not. Changing it would mostly work for idx < 0,
but it could return the wrong answer if idx ==
ARMV8_IDX_CYCLE_COUNTER. However, that won't happen in the current
code (just as the WARN_ON won't). If we're going to smear the meaning,
then we only need one function here if we get rid of the WARN_ON. We
can call it armv8pmu_event_is_chained_or_might_be_chained() to make it
clear... JK (on the name).
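
For concreteness, folding the idx check into the existing helper would be
something like this (sketch only, same names as in the patch):

static inline bool armv8pmu_event_can_chain(struct perf_event *event)
{
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);

	/* Exclude the cycle counter here instead of in _is_chained() */
	return !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) &&
	       armv8pmu_event_is_64bit(event) &&
	       !armv8pmu_has_long_event(cpu_pmu) &&
	       (event->hw.idx != ARMV8_IDX_CYCLE_COUNTER);
}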

>
> >  }
> >
> > @@ -733,6 +747,35 @@ static inline u32 armv8pmu_getreset_flags(void)
> >       return value;
> >  }
> >
> > +static void armv8pmu_disable_user_access(void)
> > +{
> > +     write_sysreg(0, pmuserenr_el0);
> > +}
> > +
> > +static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> > +{
> > +     struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> > +
> > +     if (!sysctl_perf_user_access)
> > +             return;
> > +
> > +     if (!bitmap_empty(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS)) {
> > +             int i;
> > +             /* Don't need to clear assigned counters. */
> > +             bitmap_xor(cpuc->dirty_mask, cpuc->dirty_mask, cpuc->used_mask, ARMPMU_MAX_HWEVENTS);
> > +
> > +             for_each_set_bit(i, cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS) {
> > +                     if (i == ARMV8_IDX_CYCLE_COUNTER)
> > +                             write_sysreg(0, pmccntr_el0);
> > +                     else
> > +                             armv8pmu_write_evcntr(i, 0);
> > +             }
>
> Given that we can't expose individual counters, why isn't this just:
>
>         for_each_clear_bit(i, cpuc->used_mask, ARMPMU_MAX_HWEVENTS)
>                 ...
>
> and we could get rid of the dirty_mask altogether? i.e. just zero everything
> that isn't assigned.

Sure. It's just an optimization following what x86 did.

Though we'd want to limit it to num_events, not ARMPMU_MAX_HWEVENTS.
No point in clearing nonexistent counters.
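
Something along these lines (untested sketch):

	/* Zero everything that isn't assigned, bounded by the real counter count */
	for_each_clear_bit(i, cpuc->used_mask, cpu_pmu->num_events) {
		if (i == ARMV8_IDX_CYCLE_COUNTER)
			write_sysreg(0, pmccntr_el0);
		else
			armv8pmu_write_evcntr(i, 0);
	}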

>
> > +             bitmap_zero(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS);
> > +     }
> > +
> > +     write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
> > +}
> > +
> >  static void armv8pmu_enable_event(struct perf_event *event)
> >  {
> >       /*
> > @@ -776,6 +819,16 @@ static void armv8pmu_disable_event(struct perf_event *event)
> >
> >  static void armv8pmu_start(struct arm_pmu *cpu_pmu)
> >  {
> > +     if (sysctl_perf_user_access) {
>
> armv8pmu_enable_user_access() already checks this.

Yes, because not all callers (event_mapped) check it. I put it here so
we check it first and avoid checking all the subsequent conditions
when the feature is disabled. Though I guess the ordering here is not
guaranteed.

> > +             struct perf_cpu_context *cpuctx = this_cpu_ptr(cpu_pmu->pmu.pmu_cpu_context);
> > +             struct perf_event_context *task_ctx = cpuctx->task_ctx;
> > +             if (atomic_read(&cpuctx->ctx.nr_user) ||
>
> I thought we only enabled this for per-task events, so not sure why we need
> this check. But actually, I don't get why we need any extra logic in this
> function at all; why aren't the ->mapped/->unmapped functions sufficient?

Yes, checking cpuctx->ctx.nr_user can be dropped. I went back and
forth on this as this is now the only thing we have to do to allow per
cpu events. IOW, not supporting per cpu events doesn't simplify things
with this version. The main reason now is one less thing exposed to
user space, and user space reading of per cpu events is somewhat
pointless IMO.

The ->mapped/->unmapped functions are only called when we mmap/munmap
the event. In addition to enabling/disabling access at that point, we
need to enable/disable access every time the event's context is
scheduled on or off the PMU. This is replacing doing it in switch_mm()
which you didn't like. The sched_task() hook didn't work either as it
is not called at the right times. That could be fixed in the core as
that's what I did in v8, but doing it in enable() turns out to be
simpler.

Rob


* Re: [PATCH v9 2/3] arm64: perf: Enable PMU counter userspace access for perf event
  2021-08-24 21:58     ` Rob Herring
@ 2021-08-25 19:59       ` Rob Herring
  0 siblings, 0 replies; 8+ messages in thread
From: Rob Herring @ 2021-08-25 19:59 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland
  Cc: Catalin Marinas, Peter Zijlstra, Ingo Molnar, linux-kernel,
	linux-arm-kernel, Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang,
	Ian Rogers, Alexander Shishkin, Honnappa Nagarahalli,
	Zachary.Leaf, Raphael Gault, Jonathan Cameron, Namhyung Kim,
	Itaru Kitayama, linux-perf-users

On Tue, Aug 24, 2021 at 4:58 PM Rob Herring <robh@kernel.org> wrote:
>
> On Tue, Aug 24, 2021 at 10:27 AM Will Deacon <will@kernel.org> wrote:
> >
> > On Fri, Aug 06, 2021 at 04:51:22PM -0600, Rob Herring wrote:
> > > Arm PMUs can support direct userspace access of counters which allows for
> > > low overhead (i.e. no syscall) self-monitoring of tasks. The same feature
> > > exists on x86 called 'rdpmc'. Unlike x86, userspace access will only be
> > > enabled for thread bound events. This could be extended if needed, but
> > > simplifies the implementation and reduces the chances for any
> > > information leaks (which the x86 implementation suffers from).
> > >
> > > When an event is capable of userspace access and has been mmapped, userspace
> > > access is enabled when the event is scheduled on a CPU's PMU. There's some
> > > additional overhead clearing counters when disabled in order to prevent
> > > leaking disabled counter data from other tasks.
> > >
> > > Unlike x86, enabling of userspace access must be requested with a new
> > > attr bit: config1:1. If the user requests userspace access and 64-bit
> > > counters, then chaining will be disabled and the user will get the
> > > maximum size counter the underlying h/w can support. The modes for
> > > config1 are as follows:
> > >
> > > config1 = 0 : user access disabled and always 32-bit
> > > config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
> > > config1 = 2 : user access enabled and always 32-bit
> > > config1 = 3 : user access enabled and counter size matches underlying counter.
> > >
> > > Based on work by Raphael Gault <raphael.gault@arm.com>, but has been
> > > completely re-written.
> > >
> > > Cc: Will Deacon <will@kernel.org>
> > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Cc: Ingo Molnar <mingo@redhat.com>
> > > Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> > > Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> > > Cc: Jiri Olsa <jolsa@redhat.com>
> > > Cc: Namhyung Kim <namhyung@kernel.org>
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: linux-arm-kernel@lists.infradead.org
> > > Cc: linux-perf-users@vger.kernel.org
> > > Signed-off-by: Rob Herring <robh@kernel.org>
> > >
> > > ---
> > > v9:
> > >  - Enabling/disabling of user access is now controlled in .start() and
> > >    mmap hooks which are now called on CPUs that the event is on.
> > >    Depends on rework of perf core and x86 RDPMC code posted here:
> > >    https://lore.kernel.org/lkml/20210728230230.1911468-1-robh@kernel.org/
> > >
> > > v8:
> > >  - Rework user access tracking and enabling to be done on task
> > >    context changes using sched_task() hook. This avoids the need for any
> > >    IPIs, mm_switch hooks or undef instr handler.
> > >  - Only support user access when explicitly requested on open and
> > >    only for a thread bound events. This avoids some of the information
> > >    leaks x86 has and simplifies the implementation.
> > >
> > > v7:
> > >  - Clear disabled counters when user access is enabled for a task to
> > >    avoid leaking other tasks counter data.
> > >  - Rework context switch handling utilizing sched_task callback
> > >  - Add armv8pmu_event_can_chain() helper
> > >  - Rework config1 flags handling structure
> > >  - Use ARMV8_IDX_CYCLE_COUNTER_USER define for remapped user cycle
> > >    counter index
> > >
> > > v6:
> > >  - Add new attr.config1 rdpmc bit for userspace to hint it wants
> > >    userspace access when also requesting 64-bit counters.
> > >
> > > v5:
> > >  - Only set cap_user_rdpmc if event is on current cpu
> > >  - Limit enabling/disabling access to CPUs associated with the PMU
> > >    (supported_cpus) and with the mm_struct matching current->active_mm.
> > >
> > > v2:
> > >  - Move mapped/unmapped into arm64 code. Fixes arm32.
> > >  - Rebase on cap_user_time_short changes
> > >
> > > Changes from Raphael's v4:
> > >   - Drop homogeneous check
> > >   - Disable access for chained counters
> > >   - Set pmc_width in user page
> > > ---
> > >  arch/arm64/kernel/perf_event.c | 137 +++++++++++++++++++++++++++++++--
> > >  include/linux/perf/arm_pmu.h   |   6 ++
> > >  2 files changed, 135 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> > > index 74f77b68f5f0..66d8bf62e99c 100644
> > > --- a/arch/arm64/kernel/perf_event.c
> > > +++ b/arch/arm64/kernel/perf_event.c
> > > @@ -285,6 +285,7 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
> > >
> > >  PMU_FORMAT_ATTR(event, "config:0-15");
> > >  PMU_FORMAT_ATTR(long, "config1:0");
> > > +PMU_FORMAT_ATTR(rdpmc, "config1:1");
> > >
> > >  static int sysctl_perf_user_access __read_mostly;
> > >
> > > @@ -306,9 +307,15 @@ static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
> > >       return event->attr.config1 & 0x1;
> > >  }
> > >
> > > +static inline bool armv8pmu_event_want_user_access(struct perf_event *event)
> > > +{
> > > +     return event->attr.config1 & 0x2;
> > > +}
> > > +
> > >  static struct attribute *armv8_pmuv3_format_attrs[] = {
> > >       &format_attr_event.attr,
> > >       &format_attr_long.attr,
> > > +     &format_attr_rdpmc.attr,
> > >       NULL,
> > >  };
> > >
> > > @@ -377,7 +384,7 @@ static const struct attribute_group armv8_pmuv3_caps_attr_group = {
> > >   */
> > >  #define      ARMV8_IDX_CYCLE_COUNTER 0
> > >  #define      ARMV8_IDX_COUNTER0      1
> > > -
> > > +#define      ARMV8_IDX_CYCLE_COUNTER_USER    32
> > >
> > >  /*
> > >   * We unconditionally enable ARMv8.5-PMU long event counter support
> > > @@ -389,6 +396,15 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
> > >       return (cpu_pmu->pmuver >= ID_AA64DFR0_PMUVER_8_5);
> > >  }
> > >
> > > +static inline bool armv8pmu_event_can_chain(struct perf_event *event)
> > > +{
> > > +     struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> > > +
> > > +     return !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) &&
> > > +            armv8pmu_event_is_64bit(event) &&
> > > +            !armv8pmu_has_long_event(cpu_pmu);
> >
> > Could check against ARMV8_IDX_CYCLE_COUNTER here...
> >
> > > +}
> > > +
> > >  /*
> > >   * We must chain two programmable counters for 64 bit events,
> > >   * except when we have allocated the 64bit cycle counter (for CPU
> > > @@ -398,11 +414,9 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
> > >  static inline bool armv8pmu_event_is_chained(struct perf_event *event)
> > >  {
> > >       int idx = event->hw.idx;
> > > -     struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> > >
> > >       return !WARN_ON(idx < 0) &&
> > > -            armv8pmu_event_is_64bit(event) &&
> > > -            !armv8pmu_has_long_event(cpu_pmu) &&
> > > +            armv8pmu_event_can_chain(event) &&
> > >              (idx != ARMV8_IDX_CYCLE_COUNTER);
> >
> > ... then we wouldn't need to here.
>
> Hum, well armv8pmu_event_can_chain() is supposed to answer is there
> any possibility that the event will ever be chained regardless of
> whether it's assigned or not. Changing it would mostly work for idx<0,
> but it could return the wrong answer if idx ==
> ARMV8_IDX_CYCLE_COUNTER. However, that won't happen in the current
> code (just as the WARN_ON won't). If we're going to smear the meaning,
> then we only need one function here if we get rid of the WARN_ON. We
> can call it armv8pmu_event_is_chained_or_might_be_chained() to make it
> clear... JK (on the name)
>
> >
> > >  }
> > >
> > > @@ -733,6 +747,35 @@ static inline u32 armv8pmu_getreset_flags(void)
> > >       return value;
> > >  }
> > >
> > > +static void armv8pmu_disable_user_access(void)
> > > +{
> > > +     write_sysreg(0, pmuserenr_el0);
> > > +}
> > > +
> > > +static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> > > +{
> > > +     struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> > > +
> > > +     if (!sysctl_perf_user_access)
> > > +             return;
> > > +
> > > +     if (!bitmap_empty(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS)) {
> > > +             int i;
> > > +             /* Don't need to clear assigned counters. */
> > > +             bitmap_xor(cpuc->dirty_mask, cpuc->dirty_mask, cpuc->used_mask, ARMPMU_MAX_HWEVENTS);
> > > +
> > > +             for_each_set_bit(i, cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS) {
> > > +                     if (i == ARMV8_IDX_CYCLE_COUNTER)
> > > +                             write_sysreg(0, pmccntr_el0);
> > > +                     else
> > > +                             armv8pmu_write_evcntr(i, 0);
> > > +             }
> >
> > Given that we can't expose individual counters, why isn't this just:
> >
> >         for_each_clear_bit(i, cpuc->used_mask, ARMPMU_MAX_HWEVENTS)
> >                 ...
> >
> > and we could get rid of the dirty_mask altogether? i.e. just zero everything
> > that isn't assigned.
>
> Sure. It's just an optimization following what x86 did.
>
> Though we'd want to limit it to num_events, not ARMPMU_MAX_HWEVENTS.
> No point in clearing nonexistent counters.
>
> >
> > > +             bitmap_zero(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS);
> > > +     }
> > > +
> > > +     write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
> > > +}
> > > +
> > >  static void armv8pmu_enable_event(struct perf_event *event)
> > >  {
> > >       /*
> > > @@ -776,6 +819,16 @@ static void armv8pmu_disable_event(struct perf_event *event)
> > >
> > >  static void armv8pmu_start(struct arm_pmu *cpu_pmu)
> > >  {
> > > +     if (sysctl_perf_user_access) {
> >
> > armv8pmu_enable_user_access() already checks this.
>
> Yes, because not all callers (event_mapped) check it. I put it here so
> we check it first and avoid checking all the subsequent conditions
> when the feature is disabled. Though I guess the ordering here is not
> guaranteed.

It also serves to avoid writing pmuserenr_el0 when user access is
disabled. However, there is a problem here when the sysctl is changed
from enabled to disabled. We stop touching pmuserenr_el0, so it may
get left enabled. So either we need an IPI in the sysctl to disable
access everywhere (like x86) or we need to do something like this:

struct perf_cpu_context *cpuctx = this_cpu_ptr(cpu_pmu->pmu.pmu_cpu_context);
struct perf_event_context *task_ctx = cpuctx->task_ctx;
if (sysctl_perf_user_access && task_ctx && atomic_read(&task_ctx->nr_user))
        armv8pmu_enable_user_access(cpu_pmu);
else
        armv8pmu_disable_user_access();

I guess a third option is to make the sysctl sticky. Once it gets
enabled, it stays enabled.
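
For the first (IPI) option, a rough and untested sketch of the sysctl
handler:

static void armv8pmu_disable_user_access_ipi(void *unused)
{
	armv8pmu_disable_user_access();
}

static int armv8pmu_proc_user_access_handler(struct ctl_table *table, int write,
					     void *buffer, size_t *lenp, loff_t *ppos)
{
	int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);

	if (ret || !write || sysctl_perf_user_access)
		return ret;

	/* The switch was flipped to 0: clear PMUSERENR_EL0 everywhere */
	on_each_cpu(armv8pmu_disable_user_access_ipi, NULL, 1);
	return 0;
}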

Rob
