From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1422880AbdDURIy (ORCPT );
	Fri, 21 Apr 2017 13:08:54 -0400
Received: from merlin.infradead.org ([205.233.59.134]:40866 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1422841AbdDURIl (ORCPT );
	Fri, 21 Apr 2017 13:08:41 -0400
Message-Id: <20170421150734.061251161@infradead.org>
User-Agent: quilt/0.63-1
Date: Fri, 21 Apr 2017 16:58:00 +0200
From: Peter Zijlstra
To: tglx@linutronix.de, mingo@kernel.org
Cc: linux-kernel@vger.kernel.org, ville.syrjala@linux.intel.com,
	daniel.lezcano@linaro.org, rafael.j.wysocki@intel.com,
	marta.lofstedt@intel.com, martin.peres@linux.intel.com,
	pasha.tatashin@oracle.com, peterz@infradead.org, daniel.vetter@ffwll.ch
Subject: [PATCH 4/9] x86/tsc,sched/clock,clocksource: Use clocksource watchdog to provide stable sync points
References: <20170421145756.305735607@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline; filename=peterz-sched-clock-rubarb-5.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Currently we keep sched_clock_tick() active for stable TSC in order to
keep the per-cpu state semi up-to-date. The (obvious) problem is that
by the time we detect TSC is borked, our per-cpu state is also borked.

So hook into the clocksource watchdog and call a method after we've
found it to still be stable.

There's the obvious race where the TSC goes wonky between finding it
stable and us running the callback, but closing that is too much work
and not really worth it, since we're already detecting TSC wobbles
after the fact, so we cannot, by definition, fully avoid funny clock
values.

And since the watchdog runs less often than the tick, this is also an
optimization.

Signed-off-by: Peter Zijlstra (Intel)
---
 arch/x86/kernel/tsc.c       |   10 ++++++++++
 include/linux/clocksource.h |    1 +
 include/linux/sched/clock.h |    2 +-
 kernel/sched/clock.c        |   36 +++++++++++++++++++++++++++---------
 kernel/time/clocksource.c   |    3 +++
 5 files changed, 42 insertions(+), 10 deletions(-)

--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1129,6 +1129,15 @@ static void tsc_cs_mark_unstable(struct
 	pr_info("Marking TSC unstable due to clocksource watchdog\n");
 }
 
+static void tsc_cs_tick_stable(struct clocksource *cs)
+{
+	if (tsc_unstable)
+		return;
+
+	if (using_native_sched_clock())
+		sched_clock_tick_stable();
+}
+
 /*
  * .mask MUST be CLOCKSOURCE_MASK(64). See comment above read_tsc()
 */
@@ -1142,6 +1151,7 @@ static struct clocksource clocksource_ts
 	.archdata		= { .vclock_mode = VCLOCK_TSC },
 	.resume			= tsc_resume,
 	.mark_unstable		= tsc_cs_mark_unstable,
+	.tick_stable		= tsc_cs_tick_stable,
 };
 
 void mark_tsc_unstable(char *reason)
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -96,6 +96,7 @@ struct clocksource {
 	void			(*suspend)(struct clocksource *cs);
 	void			(*resume)(struct clocksource *cs);
 	void			(*mark_unstable)(struct clocksource *cs);
+	void			(*tick_stable)(struct clocksource *cs);
 
 	/* private: */
 #ifdef CONFIG_CLOCKSOURCE_WATCHDOG
--- a/include/linux/sched/clock.h
+++ b/include/linux/sched/clock.h
@@ -63,8 +63,8 @@ extern void clear_sched_clock_stable(voi
  */
 extern u64 __sched_clock_offset;
 
-
 extern void sched_clock_tick(void);
+extern void sched_clock_tick_stable(void);
 extern void sched_clock_idle_sleep_event(void);
 extern void sched_clock_idle_wakeup_event(u64 delta_ns);
 
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -367,20 +367,38 @@ void sched_clock_tick(void)
 {
 	struct sched_clock_data *scd;
 
+	if (sched_clock_stable())
+		return;
+
+	if (unlikely(!sched_clock_running))
+		return;
+
 	WARN_ON_ONCE(!irqs_disabled());
 
-	/*
-	 * Update these values even if sched_clock_stable(), because it can
-	 * become unstable at any point in time at which point we need some
-	 * values to fall back on.
-	 *
-	 * XXX arguably we can skip this if we expose tsc_clocksource_reliable
-	 */
 	scd = this_scd();
 	__scd_stamp(scd);
+	sched_clock_local(scd);
+}
 
-	if (!sched_clock_stable() && likely(sched_clock_running))
-		sched_clock_local(scd);
+void sched_clock_tick_stable(void)
+{
+	u64 gtod, clock;
+
+	if (!sched_clock_stable())
+		return;
+
+	/*
+	 * Called under watchdog_lock.
+	 *
+	 * The watchdog just found this TSC to (still) be stable, so now is a
+	 * good moment to update our __gtod_offset. Because once we find the
+	 * TSC to be unstable, any computation will be computing crap.
+	 */
+	local_irq_disable();
+	gtod = ktime_get_ns();
+	clock = sched_clock();
+	__gtod_offset = (clock + __sched_clock_offset) - gtod;
+	local_irq_enable();
 }
 
 /*
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -233,6 +233,9 @@ static void clocksource_watchdog(unsigne
 			continue;
 		}
 
+		if (cs == curr_clocksource && cs->tick_stable)
+			cs->tick_stable(cs);
+
 		if (!(cs->flags & CLOCK_SOURCE_VALID_FOR_HRES) &&
 		    (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS) &&
 		    (watchdog->flags & CLOCK_SOURCE_IS_CONTINUOUS)) {
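
To make the __gtod_offset arithmetic in the kernel/sched/clock.c hunk concrete,
here is a small stand-alone user-space sketch. It is not kernel code:
CLOCK_MONOTONIC_RAW and CLOCK_MONOTONIC are only stand-ins for sched_clock()
and ktime_get_ns(), and the double-underscore variables merely mirror the
names used in the patch. The point it illustrates is that after the update,
gtod + __gtod_offset lines up with clock + __sched_clock_offset, which is what
lets a later switch to the unstable fallback path avoid a large jump in the
observed clock.

#define _GNU_SOURCE		/* for CLOCK_MONOTONIC_RAW */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Read a clock as nanoseconds, the way the kernel clocks hand out u64 ns. */
static uint64_t ns(clockid_t id)
{
	struct timespec ts;

	clock_gettime(id, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void)
{
	/* Stand-ins for the kernel's global offsets in kernel/sched/clock.c. */
	uint64_t __sched_clock_offset = 0;
	uint64_t __gtod_offset;
	uint64_t gtod, clock;

	/*
	 * What sched_clock_tick_stable() does while the clocksource watchdog
	 * still considers the TSC stable: sample both clocks back to back and
	 * record the offset between them.
	 */
	gtod  = ns(CLOCK_MONOTONIC);		/* ktime_get_ns() stand-in */
	clock = ns(CLOCK_MONOTONIC_RAW);	/* sched_clock() stand-in  */
	__gtod_offset = (clock + __sched_clock_offset) - gtod;

	sleep(1);

	/*
	 * If the "TSC" were declared unstable now, the fallback value
	 * gtod + __gtod_offset is close to the stable value
	 * clock + __sched_clock_offset, so readers see no large jump.
	 */
	printf("stable   path: %" PRIu64 " ns\n",
	       ns(CLOCK_MONOTONIC_RAW) + __sched_clock_offset);
	printf("fallback path: %" PRIu64 " ns\n",
	       ns(CLOCK_MONOTONIC) + __gtod_offset);

	return 0;
}

Built with e.g. "gcc -O2 sketch.c && ./a.out", the two printed values should
agree to within the drift between the two clocks plus the skew of the
back-to-back reads, mirroring why updating __gtod_offset only at stable
watchdog ticks is good enough.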