* [PATCH v5 0/2] Early boot time stamps for x86
@ 2017-08-23 20:41 Pavel Tatashin
  2017-08-23 20:41 ` [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot Pavel Tatashin
  2017-08-23 20:41 ` [PATCH v5 2/2] x86/tsc: use tsc early Pavel Tatashin
  0 siblings, 2 replies; 9+ messages in thread
From: Pavel Tatashin @ 2017-08-23 20:41 UTC (permalink / raw)
  To: pasha.tatashin, x86, linux-kernel, mingo, peterz, tglx, hpa, douly.fnst

changelog
---------
v4 - v5
	- Fixed compiler warnings on systems with stable clocks.
v3 - v4
	- Moved the tsc_early_fini() call into the 2nd patch, as reported
	  by Dou Liyang
	- Improved comment before __use_sched_clock_early to explain why
	  we need both booleans.
	- Simplified valid_clock logic in read_boot_clock64().

v2 - v3
	- Addressed comment from Thomas Gleixner
	- Timestamps become available a little later in boot, but still
	  much earlier than in mainline. This significantly simplified
	  the work.
v1 - v2
	In patch "x86/tsc: tsc early":
	- added tsc_adjusted_early()
	- fixed a 32-bit compile error by using do_div()

This series adds early boot time stamp support for x86 machines.
The SPARC patches for early boot time stamps are already integrated into
mainline Linux.

Sample output
-------------
Before:
https://hastebin.com/jadaqukubu.scala

After:
https://hastebin.com/nubipozacu.scala

As seen above, timestamps currently become available around the time the
"Security Framework" is initialized, but 26 seconds have already passed by
the time we reach this point.

Pavel Tatashin (2):
  sched/clock: interface to allow timestamps early in boot
  x86/tsc: use tsc early

 arch/x86/include/asm/tsc.h  |  4 +++
 arch/x86/kernel/setup.c     | 10 +++++--
 arch/x86/kernel/time.c      | 24 +++++++++++++++++
 arch/x86/kernel/tsc.c       | 47 +++++++++++++++++++++++++++++++++
 include/linux/sched/clock.h |  4 +++
 kernel/sched/clock.c        | 64 ++++++++++++++++++++++++++++++++++++++++++++-
 6 files changed, 150 insertions(+), 3 deletions(-)

-- 
2.14.1


* [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot
  2017-08-23 20:41 [PATCH v5 0/2] Early boot time stamps for x86 Pavel Tatashin
@ 2017-08-23 20:41 ` Pavel Tatashin
  2017-08-25 22:54   ` Thomas Gleixner
  2017-08-23 20:41 ` [PATCH v5 2/2] x86/tsc: use tsc early Pavel Tatashin
  1 sibling, 1 reply; 9+ messages in thread
From: Pavel Tatashin @ 2017-08-23 20:41 UTC (permalink / raw)
  To: pasha.tatashin, x86, linux-kernel, mingo, peterz, tglx, hpa, douly.fnst

In Linux, printk() can output timestamps next to every line. This is very
useful for tracking regressions and finding places that can be optimized.
However, the timestamps become available only later in boot. On smaller
machines this is an insignificant amount of time, but on larger machines it
can be many seconds or even minutes into the boot process.

This patch adds an interface for platforms with an unstable sched clock to
show timestamps early in boot. In order to get this functionality, a
platform must do the following (a minimal sketch of the expected wiring
follows the list):

- Implement u64 sched_clock_early()
  Clock that returns monotonic time

- Call sched_clock_early_init()
  Tells sched clock that the early clock can be used

- Call sched_clock_early_fini()
  Tells sched clock that the early clock is finished, and that sched clock
  should hand over operation to the permanent clock.

- Use weak sched_clock_early() interface to determine time from boot in
  arch specific read_boot_clock64()
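
A minimal sketch of how a platform could wire these hooks up is shown
below. The my_early_counter_*() names are hypothetical placeholders only,
not part of this patch set; the real x86 implementation, based on the TSC,
is in the second patch of this series.

#include <linux/init.h>
#include <linux/math64.h>
#include <linux/sched/clock.h>
#include <linux/time64.h>

/* Hypothetical raw counter that the platform can read very early in boot */
u64 my_early_counter_read(void);
static unsigned int my_early_counter_khz;

u64 sched_clock_early(void)
{
	/* Scale raw ticks to nanoseconds (overflow handling omitted) */
	return div_u64(my_early_counter_read() * NSEC_PER_MSEC,
		       my_early_counter_khz);
}

void __init my_platform_early_time_init(void)
{
	my_early_counter_khz = 1000000;	/* pretend a 1 GHz counter */
	sched_clock_early_init();	/* early timestamps usable from here */
}

void __init my_platform_late_time_init(void)
{
	sched_clock_early_fini();	/* hand over to the permanent clock */
}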

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/x86/kernel/time.c      | 23 ++++++++++++++++
 include/linux/sched/clock.h |  4 +++
 kernel/sched/clock.c        | 64 ++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 90 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
index e0754cdbad37..be458ea979e7 100644
--- a/arch/x86/kernel/time.c
+++ b/arch/x86/kernel/time.c
@@ -14,6 +14,7 @@
 #include <linux/i8253.h>
 #include <linux/time.h>
 #include <linux/export.h>
+#include <linux/sched/clock.h>
 
 #include <asm/vsyscall.h>
 #include <asm/x86_init.h>
@@ -95,3 +96,25 @@ void __init time_init(void)
 {
 	late_time_init = x86_late_time_init;
 }
+
+/*
+ * Called once during boot to initialize the boot time.
+ */
+void read_boot_clock64(struct timespec64 *ts)
+{
+	u64 ns_boot = sched_clock_early(); /* nsec from boot */
+	struct timespec64 ts_now;
+	bool valid_clock;
+	u64 ns_now;
+
+	/* Time from epoch */
+	read_persistent_clock64(&ts_now);
+	ns_now = timespec64_to_ns(&ts_now);
+	valid_clock = ns_boot && timespec64_valid_strict(&ts_now) &&
+			(ns_now > ns_boot);
+
+	if (!valid_clock)
+		*ts = (struct timespec64){0, 0};
+	else
+		*ts = ns_to_timespec64(ns_now - ns_boot);
+}
diff --git a/include/linux/sched/clock.h b/include/linux/sched/clock.h
index a55600ffdf4b..f8291fa28c0c 100644
--- a/include/linux/sched/clock.h
+++ b/include/linux/sched/clock.h
@@ -63,6 +63,10 @@ extern void sched_clock_tick_stable(void);
 extern void sched_clock_idle_sleep_event(void);
 extern void sched_clock_idle_wakeup_event(void);
 
+void sched_clock_early_init(void);
+void sched_clock_early_fini(void);
+u64 sched_clock_early(void);
+
 /*
  * As outlined in clock.c, provides a fast, high resolution, nanosecond
  * time source that is monotonic per cpu argument and has bounded drift
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index ca0f8fc945c6..62e5876a6fe2 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -80,9 +80,17 @@ EXPORT_SYMBOL_GPL(sched_clock);
 
 __read_mostly int sched_clock_running;
 
+static bool __read_mostly sched_clock_early_running;
+
 void sched_clock_init(void)
 {
-	sched_clock_running = 1;
+	/*
+	 * We start clock only once early clock is finished, or if early clock
+	 * was not running.
+	 */
+	if (!sched_clock_early_running)
+		sched_clock_running = 1;
+
 }
 
 #ifdef CONFIG_HAVE_UNSTABLE_SCHED_CLOCK
@@ -96,6 +104,16 @@ void sched_clock_init(void)
 static DEFINE_STATIC_KEY_FALSE(__sched_clock_stable);
 static int __sched_clock_stable_early = 1;
 
+/*
+ * Because static branches cannot be altered before jump_label_init() is called,
+ * and early time stamps may be initialized before that, we start with sched
+ * clock early static branch enabled, and global status disabled.  Early in boot
+ * it is decided whether to enable the global status as well (set
+ * sched_clock_early_running to true), and later, when early clock is no longer
+ * needed, the static branch is disabled to keep hot-path fast.
+ */
+static DEFINE_STATIC_KEY_TRUE(__use_sched_clock_early);
+
 /*
  * We want: ktime_get_ns() + __gtod_offset == sched_clock() + __sched_clock_offset
  */
@@ -362,6 +380,11 @@ u64 sched_clock_cpu(int cpu)
 	if (sched_clock_stable())
 		return sched_clock() + __sched_clock_offset;
 
+	if (static_branch_unlikely(&__use_sched_clock_early)) {
+		if (sched_clock_early_running)
+			return sched_clock_early();
+	}
+
 	if (unlikely(!sched_clock_running))
 		return 0ull;
 
@@ -444,6 +467,45 @@ void sched_clock_idle_wakeup_event(void)
 }
 EXPORT_SYMBOL_GPL(sched_clock_idle_wakeup_event);
 
+u64 __weak sched_clock_early(void)
+{
+	return 0;
+}
+
+/*
+ * Is called when sched_clock_early() is about to be finished, notifies sched
+ * clock that after this call sched_clock_early() can't be used.
+ */
+void __init sched_clock_early_fini(void)
+{
+	struct sched_clock_data *scd = this_scd();
+	u64 now_early, now_sched;
+
+	now_early = sched_clock_early();
+	now_sched = sched_clock();
+
+	__gtod_offset = now_early - scd->tick_gtod;
+	__sched_clock_offset = now_early - now_sched;
+
+	sched_clock_early_running = false;
+	static_branch_disable(&__use_sched_clock_early);
+
+	/* Now that early clock is finished, start regular sched clock */
+	sched_clock_init();
+}
+
+/*
+ * Notifies sched clock that early boot clocksource is available, it means that
+ * the current platform has implemented sched_clock_early().
+ *
+ * The early clock is running until we switch to a stable clock, or when we
+ * learn that the stable clock is not available.
+ */
+void __init sched_clock_early_init(void)
+{
+	sched_clock_early_running = true;
+}
+
 #else /* CONFIG_HAVE_UNSTABLE_SCHED_CLOCK */
 
 u64 sched_clock_cpu(int cpu)
-- 
2.14.1


* [PATCH v5 2/2] x86/tsc: use tsc early
  2017-08-23 20:41 [PATCH v5 0/2] Early boot time stamps for x86 Pavel Tatashin
  2017-08-23 20:41 ` [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot Pavel Tatashin
@ 2017-08-23 20:41 ` Pavel Tatashin
  1 sibling, 0 replies; 9+ messages in thread
From: Pavel Tatashin @ 2017-08-23 20:41 UTC (permalink / raw)
  To: pasha.tatashin, x86, linux-kernel, mingo, peterz, tglx, hpa, douly.fnst

tsc_early_init():
Use various methods to determine the availability of the TSC feature and
its frequency early in boot, and if that is possible initialize the TSC and
also call sched_clock_early_init() to be able to get timestamps early in
boot.

tsc_early_fini():
Implement the finish part of the early tsc feature: print a message about
the offset, which can be useful to find out how much time was spent in POST
and the boot manager, and also call sched_clock_early_fini() to let sched
clock know that the early clock is finished.

sched_clock_early():
TSC-based implementation of the weak function that is defined in the sched
clock code.

Call tsc_early_init() to initialize early boot time stamp functionality on
the supported x86 platforms, and call tsc_early_fini() to finish this
feature after the permanent tsc clocksource has been initialized.
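
For reference, the cycles-to-nanoseconds conversion that sched_clock_early()
below relies on is the usual mult/shift fixed-point scheme; restated as a
small sketch (the real mul/shift values come from clocks_calc_mult_shift()):

/*
 * ns = cycles * NSEC_PER_MSEC / khz is approximated as
 * ns = (cycles * cyc2ns_mul) >> cyc2ns_shift, and cyc2ns_offset is chosen
 * so that the clock reads ~0 at tsc_early_init() time.
 * mul_u64_u32_shr() keeps the 64x32 bit product from overflowing.
 */
static u64 example_cyc2ns(u64 cycles, u32 mul, u32 shift, u64 offset)
{
	return mul_u64_u32_shr(cycles, mul, shift) + offset;
}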

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/x86/include/asm/tsc.h |  4 ++++
 arch/x86/kernel/setup.c    | 10 ++++++++--
 arch/x86/kernel/time.c     |  1 +
 arch/x86/kernel/tsc.c      | 47 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index f5e6f1c417df..6dc9618b24e3 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -50,11 +50,15 @@ extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
 extern void tsc_verify_tsc_adjust(bool resume);
 extern void check_tsc_sync_source(int cpu);
 extern void check_tsc_sync_target(void);
+void tsc_early_init(unsigned int khz);
+void tsc_early_fini(void);
 #else
 static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
 static inline void tsc_verify_tsc_adjust(bool resume) { }
 static inline void check_tsc_sync_source(int cpu) { }
 static inline void check_tsc_sync_target(void) { }
+static inline void tsc_early_init(unsigned int khz) { }
+static inline void tsc_early_fini(void) { }
 #endif
 
 extern int notsc_setup(char *);
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3486d0498800..413434d98a23 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -812,7 +812,11 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
 	return 0;
 }
 
-static void __init simple_udelay_calibration(void)
+/*
+ * Initialize early tsc to show early boot timestamps, and also loops_per_jiffy
+ * for udelay
+ */
+static void __init early_clock_calibration(void)
 {
 	unsigned int tsc_khz, cpu_khz;
 	unsigned long lpj;
@@ -827,6 +831,8 @@ static void __init simple_udelay_calibration(void)
 	if (!tsc_khz)
 		return;
 
+	tsc_early_init(tsc_khz);
+
 	lpj = tsc_khz * 1000;
 	do_div(lpj, HZ);
 	loops_per_jiffy = lpj;
@@ -1039,7 +1045,7 @@ void __init setup_arch(char **cmdline_p)
 	 */
 	init_hypervisor_platform();
 
-	simple_udelay_calibration();
+	early_clock_calibration();
 
 	x86_init.resources.probe_roms();
 
diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
index be458ea979e7..2c82c7e0f747 100644
--- a/arch/x86/kernel/time.c
+++ b/arch/x86/kernel/time.c
@@ -86,6 +86,7 @@ static __init void x86_late_time_init(void)
 {
 	x86_init.timers.timer_init();
 	tsc_init();
+	tsc_early_fini();
 }
 
 /*
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 796d96bb0821..bd44c2dd4235 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1263,6 +1263,53 @@ static int __init init_tsc_clocksource(void)
  */
 device_initcall(init_tsc_clocksource);
 
+#ifdef CONFIG_X86_TSC
+
+static struct cyc2ns_data  cyc2ns_early;
+static bool sched_clock_early_enabled;
+
+u64 sched_clock_early(void)
+{
+	u64 ns;
+
+	if (!sched_clock_early_enabled)
+		return 0;
+	ns = mul_u64_u32_shr(rdtsc(), cyc2ns_early.cyc2ns_mul,
+			     cyc2ns_early.cyc2ns_shift);
+	return ns + cyc2ns_early.cyc2ns_offset;
+}
+
+/*
+ * Initialize clock for early time stamps
+ */
+void __init tsc_early_init(unsigned int khz)
+{
+	sched_clock_early_enabled = true;
+	clocks_calc_mult_shift(&cyc2ns_early.cyc2ns_mul,
+			       &cyc2ns_early.cyc2ns_shift,
+			       khz, NSEC_PER_MSEC, 0);
+	cyc2ns_early.cyc2ns_offset = -sched_clock_early();
+	sched_clock_early_init();
+}
+
+void __init tsc_early_fini(void)
+{
+	unsigned long long t;
+	unsigned long r;
+
+	/* We did not have early sched clock if multiplier is 0 */
+	if (cyc2ns_early.cyc2ns_mul == 0)
+		return;
+
+	t = -cyc2ns_early.cyc2ns_offset;
+	r = do_div(t, NSEC_PER_SEC);
+
+	sched_clock_early_fini();
+	pr_info("sched clock early is finished, offset [%lld.%09lds]\n", t, r);
+	sched_clock_early_enabled = false;
+}
+#endif /* CONFIG_X86_TSC */
+
 void __init tsc_init(void)
 {
 	u64 lpj, cyc;
-- 
2.14.1


* Re: [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot
  2017-08-23 20:41 ` [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot Pavel Tatashin
@ 2017-08-25 22:54   ` Thomas Gleixner
  2017-08-28 14:17     ` Pasha Tatashin
  0 siblings, 1 reply; 9+ messages in thread
From: Thomas Gleixner @ 2017-08-25 22:54 UTC (permalink / raw)
  To: Pavel Tatashin; +Cc: x86, linux-kernel, mingo, peterz, hpa, douly.fnst

On Wed, 23 Aug 2017, Pavel Tatashin wrote:
> 
> - Use weak sched_clock_early() interface to determine time from boot in
>   arch specific read_boot_clock64()

weak sched_clock_early() is not an interface. The weak implementation is
merely a placeholder which can be overridden by a real implementation.

Aside of that this change is completely unrelated to the sched clock core
changes and wants to be split out into a separate patch.

> +/*
> + * Called once during boot to initialize the boot time.
> + */
> +void read_boot_clock64(struct timespec64 *ts)

And because its called only once, it does not need to be marked __init()
and must be kept around forever, right?

> +{
> +	u64 ns_boot = sched_clock_early(); /* nsec from boot */

Please do not use tail comments. They are a horrible habit.

Instead of adding this crap you'd have better spent time in adding proper
comments explaining the reasoning behind this function,

> +	struct timespec64 ts_now;
> +	bool valid_clock;
> +	u64 ns_now;
> +
> +	/* Time from epoch */
> +	read_persistent_clock64(&ts_now);
> +	ns_now = timespec64_to_ns(&ts_now);
> +	valid_clock = ns_boot && timespec64_valid_strict(&ts_now) &&
> +			(ns_now > ns_boot);
> +
> +	if (!valid_clock)
> +		*ts = (struct timespec64){0, 0};
> +	else
> +		*ts = ns_to_timespec64(ns_now - ns_boot);
> +}

This is really broken. Look at the time keeping init code. It does:

     read_persistent_clock64(&now);
     ...
     read_boot_clock64(&boot);
     ...
     tk_set_xtime(tk, &now);
     ...
     set_normalized_timespec64(&tmp, -boot.tv_sec, -boot.tv_nsec);
     tk_set_wall_to_mono(tk, tmp);

Lets assume that the initial read_persistent_clock64() happens right before
the second. For simplicity lets assume we get 1000 seconds since the epoch.

Now read_boot_clock() reads sched_clock_early() which returns 1 second.

The second read_persistent_clock64() returns 1001 seconds since the epoch
because the RTC advanced by now. So the resulting time stamp is going to be
1000s since the epoch.

In case the RTC still returns 1000 since the epoch, the resulting time stamp
is 999s since the epoch.

A full second difference. That's time stamp lottery but nothing which we
want to base any boot time analysis on.

You have to come up with something more useful than that.

Thanks,

	tglx


* Re: [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot
  2017-08-25 22:54   ` Thomas Gleixner
@ 2017-08-28 14:17     ` Pasha Tatashin
  2017-08-28 15:13       ` Thomas Gleixner
  2017-08-28 15:15       ` Thomas Gleixner
  0 siblings, 2 replies; 9+ messages in thread
From: Pasha Tatashin @ 2017-08-28 14:17 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: x86, linux-kernel, mingo, peterz, hpa, douly.fnst

Hi Thomas,

Thank you for your comments. My replies below.

>> +/*
>> + * Called once during boot to initialize the boot time.
>> + */
>> +void read_boot_clock64(struct timespec64 *ts)
> 
> And because its called only once, it does not need to be marked __init()
> and must be kept around forever, right?

This is because every other architecture that implements read_boot_clock64()
does so without __init: arm, s390. Besides, the original weak stub does not
have the __init macro. So, I can certainly try to add it for x86, but I am
not sure what the behavior is once the __init section is gone but the weak
implementation stays.

> 
>> +{
>> +	u64 ns_boot = sched_clock_early(); /* nsec from boot */
> 
> Please do not use tail comments. They are a horrible habit.
> 
> Instead of adding this crap you'd have better spent time in adding proper
> comments explaining the reasoning behind this function,

OK, I will add an introductory comment and remove the tail comment.

> This is really broken. Look at the time keeping init code. It does:
> 
>       read_persistent_clock64(&now);
>       ...
>       read_boot_clock64(&boot);
>       ...
>       tk_set_xtime(tk, &now);
>       ...
>       set_normalized_timespec64(&tmp, -boot.tv_sec, -boot.tv_nsec);
>       tk_set_wall_to_mono(tk, tmp);
> 
> Lets assume that the initial read_persistent_clock64() happens right before
> the second. For simplicity lets assume we get 1000 seconds since the epoch.
> 
> Now read_boot_clock() reads sched_clock_early() which returns 1 second.
> 
> The second read_persistent_clock64() returns 1001 seconds since the epoch
> because the RTC advanced by now. So the resulting time stamp is going to be
> 1000s since the epoch.
> 
> In case the RTC still returns 1000 since the epoch, the resulting time stamp
> is 999s since the epoch.
> 
> A full second difference. That's time stamp lottery but nothing which we
> want to base any boot time analysis on.
> 
> You have to come up with something more useful than that.
> 

This makes sense. Changing order in timekeeping_init(void) should take 
care of this:

Change to:

void __init timekeeping_init(void)
{
	/*
	 * We must determine boot timestamp before getting current  	
	 * persistent clock value, because implementation of
	 * read_boot_clock64() might also call the persistent
	 * clock, and a leap second may occur.
	 */

	read_boot_clock64(&boot);
	...
	read_persistent_clock64(&now);
	...
}


* Re: [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot
  2017-08-28 14:17     ` Pasha Tatashin
@ 2017-08-28 15:13       ` Thomas Gleixner
  2017-08-28 17:59         ` Pasha Tatashin
  2017-08-28 15:15       ` Thomas Gleixner
  1 sibling, 1 reply; 9+ messages in thread
From: Thomas Gleixner @ 2017-08-28 15:13 UTC (permalink / raw)
  To: Pasha Tatashin; +Cc: x86, linux-kernel, mingo, peterz, hpa, douly.fnst

On Mon, 28 Aug 2017, Pasha Tatashin wrote:
> This makes sense. Changing order in timekeeping_init(void) should take care of
> this:
> 
> Change to:
> 
> void __init timekeeping_init(void)
> {
> 	/*
> 	 * We must determine boot timestamp before getting current  	
> 	 * persistent clock value, because implementation of
> 	 * read_boot_clock64() might also call the persistent
> 	 * clock, and a leap second may occur.
> 	 */
> 
> 	read_boot_clock64(&boot);
> 	...
> 	read_persistent_clock64(&now);

No. That's the same crap just the other way round.

s390 can do that, because the boot timestamp is correlated with the
persistent clock. Yours not so much.

Thanks,

	tglx


* Re: [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot
  2017-08-28 14:17     ` Pasha Tatashin
  2017-08-28 15:13       ` Thomas Gleixner
@ 2017-08-28 15:15       ` Thomas Gleixner
  2017-08-28 17:47         ` Pasha Tatashin
  1 sibling, 1 reply; 9+ messages in thread
From: Thomas Gleixner @ 2017-08-28 15:15 UTC (permalink / raw)
  To: Pasha Tatashin; +Cc: x86, linux-kernel, mingo, peterz, hpa, douly.fnst

On Mon, 28 Aug 2017, Pasha Tatashin wrote:
> > > +/*
> > > + * Called once during boot to initialize the boot time.
> > > + */
> > > +void read_boot_clock64(struct timespec64 *ts)
> > 
> > And because its called only once, it does not need to be marked __init()
> > and must be kept around forever, right?
> 
> This is because every other architecture that implements read_boot_clock64()
> does so without __init: arm, s390. Besides, the original weak stub does not
> have the __init macro. So, I can certainly try to add it for x86, but I am
> not sure what the behavior is once the __init section is gone but the weak
> implementation stays.

And what about fixing that everywhere?

Just because something is wrong, it does not mean that it needs to be
proliferated.

Thanks,

	tglx


* Re: [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot
  2017-08-28 15:15       ` Thomas Gleixner
@ 2017-08-28 17:47         ` Pasha Tatashin
  0 siblings, 0 replies; 9+ messages in thread
From: Pasha Tatashin @ 2017-08-28 17:47 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: x86, linux-kernel, mingo, peterz, hpa, douly.fnst

>>> And because its called only once, it does not need to be marked __init()
>>> and must be kept around forever, right?
>>
>> This is because every other architecture that implements read_boot_clock64()
>> does so without __init: arm, s390. Besides, the original weak stub does not
>> have the __init macro. So, I can certainly try to add it for x86, but I am
>> not sure what the behavior is once the __init section is gone but the weak
>> implementation stays.
> 
> And what about fixing that everywhere?
> 

Sure, I will update it everywhere.


* Re: [PATCH v5 1/2] sched/clock: interface to allow timestamps early in boot
  2017-08-28 15:13       ` Thomas Gleixner
@ 2017-08-28 17:59         ` Pasha Tatashin
  0 siblings, 0 replies; 9+ messages in thread
From: Pasha Tatashin @ 2017-08-28 17:59 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: x86, linux-kernel, mingo, peterz, hpa, douly.fnst

>> void __init timekeeping_init(void)
>> {
>> 	/*
>> 	 * We must determine boot timestamp before getting current  	
>> 	 * persistent clock value, because implementation of
>> 	 * read_boot_clock64() might also call the persistent
>> 	 * clock, and a leap second may occur.
>> 	 */
>>
>> 	read_boot_clock64(&boot);
>> 	...
>> 	read_persistent_clock64(&now);
> 
> No. That's the same crap just the other way round.
> 
> s390 can do that, because the boot timestamp is correlated with the
> persistent clock. Your's not so much.
> 

OK, how about reading the persistent clock only once, and sending its
value to read_boot_clock64() via a new argument, to be used for
calculating the boot timestamp:

read_boot_clock64(&now, &boot);

Does this sound alright or is there a better way?

I would need to update read_boot_clock64() everywhere it is declared
anyway to add the __init macro, so this extra argument is not going to
increase the number of changed lines.
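
Roughly something like this for the x86 side (just a sketch of the idea,
the exact signature is of course open):

void __init read_boot_clock64(struct timespec64 *now, struct timespec64 *ts)
{
	u64 ns_boot = sched_clock_early();
	u64 ns_now = timespec64_to_ns(now);

	if (ns_boot && timespec64_valid_strict(now) && ns_now > ns_boot)
		*ts = ns_to_timespec64(ns_now - ns_boot);
	else
		*ts = (struct timespec64){0, 0};
}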

Thank you,
Pasha

