* [PATCH] arm: perf: Prevent wraparound during overflow
@ 2014-11-19 15:52 Daniel Thompson
  2014-11-19 18:11 ` Will Deacon
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Daniel Thompson @ 2014-11-19 15:52 UTC (permalink / raw)
  To: Will Deacon, Russell King
  Cc: Daniel Thompson, linux-arm-kernel, linux-kernel, Peter Zijlstra,
	Paul Mackerras, Ingo Molnar, Arnaldo Carvalho de Melo, patches,
	linaro-kernel, John Stultz, Sumit Semwal

If the overflow threshold for a counter is set above or near the
0xffffffff boundary, the kernel may lose track of the overflow, causing
only events that occur *after* the overflow to be recorded. Specifically,
the problem occurs when the value of the performance counter overtakes
its originally programmed value due to wraparound.
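
To make the failure mode concrete, below is a small illustrative
userspace sketch (the names are made up; only the masked-subtraction
pattern used to fold the 32-bit counter into the 64-bit event count is
borrowed from the existing update path). Once the counter has wrapped and
overtaken the value it was programmed with, the masked delta comes out
tiny and the events counted before the overflow are lost:

	#include <stdint.h>
	#include <stdio.h>

	#define MAX_PERIOD 0xffffffffULL	/* 32-bit counter */

	static uint64_t counter_delta(uint64_t prev_raw, uint64_t new_raw)
	{
		/* same shape as the masked subtraction in the update path */
		return (new_raw - prev_raw) & MAX_PERIOD;
	}

	int main(void)
	{
		/* threshold near 0xffffffff => counter programmed to a small value */
		uint64_t prev = 2;

		/* counter read before it wraps: the delta is correct */
		printf("ok:   %#llx\n",
		       (unsigned long long)counter_delta(prev, 0xfffffffa));

		/*
		 * counter wrapped at 2^32 and overtook 'prev' before being
		 * read: the masked delta is a tiny number (4 here) and the
		 * ~2^32 events counted before the overflow are lost
		 */
		printf("lost: %#llx\n",
		       (unsigned long long)counter_delta(prev, 6));

		return 0;
	}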

Typical solutions to this problem are either to avoid programming values
that are likely to be overtaken, or to treat the overflow bit as the 33rd
bit of the counter.

It's somewhat fiddly to refactor the code to correctly handle the 33rd
bit during irqsave sections (context switches, for example), so instead
we take the simpler approach of avoiding values likely to be overtaken.

We set the limit to half of max_period because this matches the limit
imposed in __hw_perf_event_init(). This doubles the interrupt rate for
large threshold values; however, even with a very fast counter ticking
at 4GHz the interrupt rate would only be ~1Hz.
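
As a rough sanity check on that figure (assuming 32-bit counters, i.e.
max_period = 0xffffffff):

	left is clamped to max_period >> 1  ->  ~2^31 counts
	2^31 counts / (4 * 10^9 counts/s)   ->  ~0.5s between interrupts

i.e. an interrupt rate of roughly 1-2Hz even in this worst case.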

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
---

Notes:
    There is similar code in the arm64 tree which retains the assumptions of
    the original arm code regarding 32-bit wide performance counters. If
    this patch doesn't get beaten up during review I'll also share a similar
    patch for arm64.
    

 arch/arm/kernel/perf_event.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 266cba46db3e..b50a770f8c99 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -115,8 +115,14 @@ int armpmu_event_set_period(struct perf_event *event)
 		ret = 1;
 	}

-	if (left > (s64)armpmu->max_period)
-		left = armpmu->max_period;
+	/*
+	 * Limit the maximum period to prevent the counter value
+	 * from overtaking the one we are about to program. In
+	 * effect we are reducing max_period to account for
+	 * interrupt latency (and we are being very conservative).
+	 */
+	if (left > (s64)(armpmu->max_period >> 1))
+		left = armpmu->max_period >> 1;

 	local64_set(&hwc->prev_count, (u64)-left);

--
1.9.3



Thread overview: 13+ messages
2014-11-19 15:52 [PATCH] arm: perf: Prevent wraparound during overflow Daniel Thompson
2014-11-19 18:11 ` Will Deacon
2014-11-20 12:14   ` Daniel Thompson
2014-11-21 16:24 ` [PATCH v2 0/2] arm+arm64: " Daniel Thompson
2014-11-21 16:24   ` [PATCH v2 1/2] arm: " Daniel Thompson
2014-12-04 10:26     ` Will Deacon
2014-12-04 13:58       ` Daniel Thompson
2015-01-05 14:57     ` Peter Zijlstra
2015-01-05 19:31       ` Daniel Thompson
2015-01-06 19:46         ` Will Deacon
2014-11-21 16:24   ` [PATCH v2 2/2] arm64: " Daniel Thompson
2014-12-04 10:27     ` Will Deacon
2014-12-22  9:39 ` [PATCH 3.19-rc1 v3] arm: " Daniel Thompson
