Date: Fri, 27 Mar 2015 00:41:28 -0700
From: tip-bot for Daniel Thompson
To: linux-tip-commits@vger.kernel.org
Cc: peterz@infradead.org, will.deacon@arm.com, sboyd@codeaurora.org,
    mingo@kernel.org, john.stultz@linaro.org, linux@arm.linux.org.uk,
    daniel.thompson@linaro.org, hpa@zytor.com, catalin.marinas@arm.com,
    linux-kernel@vger.kernel.org, tglx@linutronix.de
In-Reply-To: <1427397806-20889-2-git-send-email-john.stultz@linaro.org>
References: <1427397806-20889-2-git-send-email-john.stultz@linaro.org>
Subject: [tip:timers/core] timers, sched/clock: Match scope of read and write seqcounts
Git-Commit-ID: 8710e914027e4f64058ebbf0501cc6db3cc8454f

Commit-ID:  8710e914027e4f64058ebbf0501cc6db3cc8454f
Gitweb:     http://git.kernel.org/tip/8710e914027e4f64058ebbf0501cc6db3cc8454f
Author:     Daniel Thompson
AuthorDate: Thu, 26 Mar 2015 12:23:22 -0700
Committer:  Ingo Molnar
CommitDate: Fri, 27 Mar 2015 08:33:56 +0100

timers, sched/clock: Match scope of read and write seqcounts

Currently the scope of the raw_write_seqcount_begin/end() in
sched_clock_register() far exceeds the scope of the read section in
sched_clock(). This gives the impression of safety during cursory review
but achieves little.

Note that this is likely to be a latent issue at present, because
sched_clock_register() is typically called before interrupts are enabled;
however, the issue does risk bugs being needlessly introduced as the code
evolves.

This patch fixes the problem by increasing the scope of the read locking
performed by sched_clock() to cover all data modified by
sched_clock_register().

We also improve clarity by moving writes to struct clock_data that do not
impact sched_clock() outside of the critical section.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
[ Reworked it slightly to apply to tip/timers/core ]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1427397806-20889-2-git-send-email-john.stultz@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/time/sched_clock.c | 26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index ca3bc5c..1751e95 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -58,23 +58,21 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 
 unsigned long long notrace sched_clock(void)
 {
-	u64 epoch_ns;
-	u64 epoch_cyc;
-	u64 cyc;
+	u64 cyc, res;
 	unsigned long seq;
 
-	if (cd.suspended)
-		return cd.epoch_ns;
-
 	do {
 		seq = raw_read_seqcount_begin(&cd.seq);
-		epoch_cyc = cd.epoch_cyc;
-		epoch_ns = cd.epoch_ns;
+
+		res = cd.epoch_ns;
+		if (!cd.suspended) {
+			cyc = read_sched_clock();
+			cyc = (cyc - cd.epoch_cyc) & sched_clock_mask;
+			res += cyc_to_ns(cyc, cd.mult, cd.shift);
+		}
 	} while (read_seqcount_retry(&cd.seq, seq));
 
-	cyc = read_sched_clock();
-	cyc = (cyc - epoch_cyc) & sched_clock_mask;
-	return epoch_ns + cyc_to_ns(cyc, cd.mult, cd.shift);
+	return res;
 }
 
 /*
@@ -111,7 +109,6 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 {
 	u64 res, wrap, new_mask, new_epoch, cyc, ns;
 	u32 new_mult, new_shift;
-	ktime_t new_wrap_kt;
 	unsigned long r;
 	char r_unit;
 
@@ -124,10 +121,11 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	clocks_calc_mult_shift(&new_mult, &new_shift, rate, NSEC_PER_SEC, 3600);
 
 	new_mask = CLOCKSOURCE_MASK(bits);
+	cd.rate = rate;
 
 	/* calculate how many nanosecs until we risk wrapping */
 	wrap = clocks_calc_max_nsecs(new_mult, new_shift, 0, new_mask, NULL);
-	new_wrap_kt = ns_to_ktime(wrap);
+	cd.wrap_kt = ns_to_ktime(wrap);
 
 	/* update epoch for new counter and update epoch_ns from old counter*/
 	new_epoch = read();
@@ -138,8 +136,6 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	raw_write_seqcount_begin(&cd.seq);
 	read_sched_clock = read;
 	sched_clock_mask = new_mask;
-	cd.rate = rate;
-	cd.wrap_kt = new_wrap_kt;
 	cd.mult = new_mult;
 	cd.shift = new_shift;
 	cd.epoch_cyc = new_epoch;
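For readers less familiar with the pattern: the invariant this patch restores
is that every field the writer publishes between raw_write_seqcount_begin()
and raw_write_seqcount_end() must also be consumed inside the reader's
begin/retry loop, otherwise the seqcount guards nothing. Below is a minimal,
self-contained userspace sketch of that read-retry discipline. It is not
kernel code: the seqcount is a simplified C11-atomics stand-in, struct
clock_data merely mirrors the fields the patch touches, the cycle counter is
faked, and the memory ordering is deliberately simplified for illustration.

/* seqcount_sketch.c - illustrative only, single-threaded demo */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static atomic_uint seq;			/* even: stable, odd: write in progress */

static struct clock_data {		/* mirrors the fields the patch touches */
	uint64_t epoch_ns;
	uint64_t epoch_cyc;
	uint32_t mult, shift;
	int suspended;
} cd;

static void write_begin(void)
{
	atomic_fetch_add_explicit(&seq, 1, memory_order_acq_rel);
}

static void write_end(void)
{
	atomic_fetch_add_explicit(&seq, 1, memory_order_acq_rel);
}

static unsigned read_begin(void)
{
	unsigned s;

	/* Spin until no write is in flight (sequence number is even). */
	while ((s = atomic_load_explicit(&seq, memory_order_acquire)) & 1)
		;
	return s;
}

static int read_retry(unsigned s)
{
	/* A changed sequence number means a writer ran; redo the read. */
	return atomic_load_explicit(&seq, memory_order_acquire) != s;
}

int main(void)
{
	uint64_t cyc = 1000, res;	/* stand-in for read_sched_clock() */
	unsigned s;

	/* Writer: publish one consistent clock_data snapshot. */
	write_begin();
	cd.epoch_ns = 42;
	cd.epoch_cyc = 0;
	cd.mult = 4;
	cd.shift = 2;
	write_end();

	/*
	 * Reader: as in the patched sched_clock(), *every* value derived
	 * from cd is computed inside the begin/retry loop, so a concurrent
	 * writer can never hand us a half-updated epoch/mult/shift mix.
	 */
	do {
		s = read_begin();
		res = cd.epoch_ns;
		if (!cd.suspended)
			res += ((cyc - cd.epoch_cyc) * cd.mult) >> cd.shift;
	} while (read_retry(s));

	printf("res = %llu\n", (unsigned long long)res);
	return 0;
}

Note how the mult/shift arithmetic happens inside the retry loop. The
pre-patch sched_clock() read epoch_cyc and epoch_ns inside its loop but
applied sched_clock_mask, mult and shift after it, which is exactly the
scope mismatch the commit message describes.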