linux-kernel.vger.kernel.org archive mirror
* [PATCH v7 0/6] Early boot time stamps for x86
@ 2017-11-02 17:26 Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag Pavel Tatashin
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-02 17:26 UTC (permalink / raw)
  To: steven.sistare, daniel.m.jordan, linux, schwidefsky,
	heiko.carstens, john.stultz, sboyd, x86, linux-kernel, mingo,
	tglx, hpa, douly.fnst

changelog
---------
v6 - v7
	- Removed tsc_disabled flag; notsc is now equivalent to
	  tsc=unstable
	- Simplified the changes to sched/clock.c by removing
	  sched_clock_early() and friends, as requested by Peter
	  Zijlstra. We now always use sched_clock().
	- Modified x86 sched_clock() to return either early boot time or
	  regular time.
	- Added another example of why early boot time is important

v5 - v6
	- Added a new patch:
		time: sync read_boot_clock64() with persistent clock
	  which fixes the missing __init macro and enables the time
	  discrepancy fix noted by Thomas Gleixner
	- Split "x86/time: read_boot_clock64() implementation" into a
	  separate patch
v4 - v5
	- Fix compiler warnings on systems with stable clocks.

v3 - v4
	- Fixed tsc_early_fini() call to be in the 2nd patch as reported
	  by Dou Liyang
	- Improved comment before __use_sched_clock_early to explain why
	  we need both booleans.
	- Simplified valid_clock logic in read_boot_clock64().

v2 - v3
	- Addressed comment from Thomas Gleixner
	- Timestamps are available a little later in boot but still much
	  earlier than in mainline. This significantly simplified this
	  work.

v1 - v2
	In patch "x86/tsc: tsc early":
	- added tsc_adjusted_early()
	- fixed a 32-bit compile error by using do_div()

This series adds early boot time stamp support for x86 machines.
SPARC patches for early boot time stamps are already integrated into
mainline Linux.

Sample output
-------------
Before:
https://hastebin.com/jadaqukubu.scala

After:
https://hastebin.com/nubipozacu.scala

For more examples of how early time stamps are used, see the following:

Example 1:
https://lwn.net/Articles/734374/
- Without early boot time stamps we would not know about the extra time
  that is spent zeroing struct pages early in boot, even when deferred
  page initialization is used.

Example 2:
https://patchwork.kernel.org/patch/10021247/
- If early boot timestamps were available, the engineer who introduced
  this bug would have noticed the extra time that is spent early in boot.

Pavel Tatashin (6):
  x86/tsc: remove tsc_disabled flag
  time: sync read_boot_clock64() with persistent clock
  x86/time: read_boot_clock64() implementation
  sched: early boot clock
  x86/paravirt: add active_sched_clock to pv_time_ops
  x86/tsc: use tsc early

 arch/arm/kernel/time.c                |  2 +-
 arch/s390/kernel/time.c               |  2 +-
 arch/x86/include/asm/paravirt.h       |  2 +-
 arch/x86/include/asm/paravirt_types.h |  1 +
 arch/x86/include/asm/tsc.h            |  4 ++
 arch/x86/kernel/paravirt.c            |  1 +
 arch/x86/kernel/setup.c               | 10 +++-
 arch/x86/kernel/time.c                | 31 +++++++++++
 arch/x86/kernel/tsc.c                 | 98 ++++++++++++++++++++++++++++++-----
 arch/x86/xen/time.c                   |  7 +--
 include/linux/timekeeping.h           | 10 ++--
 kernel/sched/clock.c                  | 10 +++-
 kernel/time/timekeeping.c             |  8 ++-
 13 files changed, 155 insertions(+), 31 deletions(-)

-- 
2.15.0

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag
  2017-11-02 17:26 [PATCH v7 0/6] Early boot time stamps for x86 Pavel Tatashin
@ 2017-11-02 17:26 ` Pavel Tatashin
  2017-11-03  1:58   ` Dou Liyang
  2017-11-02 17:26 ` [PATCH v7 2/6] time: sync read_boot_clock64() with persistent clock Pavel Tatashin
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-02 17:26 UTC (permalink / raw)
  To: steven.sistare, daniel.m.jordan, linux, schwidefsky,
	heiko.carstens, john.stultz, sboyd, x86, linux-kernel, mingo,
	tglx, hpa, douly.fnst

tsc_disabled is set when notsc is passed as a kernel parameter. The reason
we have notsc is to avoid timing problems on multi-processor systems.
However, we already have a mechanism to detect and resolve these issues by
invoking the TSC unstable path.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/x86/kernel/tsc.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 796d96bb0821..1c4502a2b7b2 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -37,11 +37,6 @@ EXPORT_SYMBOL(tsc_khz);
  */
 static int __read_mostly tsc_unstable;
 
-/* native_sched_clock() is called before tsc_init(), so
-   we must start with the TSC soft disabled to prevent
-   erroneous rdtsc usage on !boot_cpu_has(X86_FEATURE_TSC) processors */
-static int __read_mostly tsc_disabled = -1;
-
 static DEFINE_STATIC_KEY_FALSE(__use_tsc);
 
 int tsc_clocksource_reliable;
@@ -248,7 +243,7 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
 int __init notsc_setup(char *str)
 {
 	pr_warn("Kernel compiled with CONFIG_X86_TSC, cannot disable TSC completely\n");
-	tsc_disabled = 1;
+	mark_tsc_unstable("boot parameter notsc");
 	return 1;
 }
 #else
@@ -1229,7 +1224,7 @@ static void tsc_refine_calibration_work(struct work_struct *work)
 
 static int __init init_tsc_clocksource(void)
 {
-	if (!boot_cpu_has(X86_FEATURE_TSC) || tsc_disabled > 0 || !tsc_khz)
+	if (!boot_cpu_has(X86_FEATURE_TSC) || !tsc_khz)
 		return 0;
 
 	if (tsc_clocksource_reliable)
@@ -1311,12 +1306,6 @@ void __init tsc_init(void)
 		set_cyc2ns_scale(tsc_khz, cpu, cyc);
 	}
 
-	if (tsc_disabled > 0)
-		return;
-
-	/* now allow native_sched_clock() to use rdtsc */
-
-	tsc_disabled = 0;
 	static_branch_enable(&__use_tsc);
 
 	if (!no_sched_irq_time)
@@ -1348,7 +1337,7 @@ unsigned long calibrate_delay_is_known(void)
 	int sibling, cpu = smp_processor_id();
 	struct cpumask *mask = topology_core_cpumask(cpu);
 
-	if (!tsc_disabled && !cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC))
+	if (!cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC))
 		return 0;
 
 	if (!mask)
-- 
2.15.0

* [PATCH v7 2/6] time: sync read_boot_clock64() with persistent clock
  2017-11-02 17:26 [PATCH v7 0/6] Early boot time stamps for x86 Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag Pavel Tatashin
@ 2017-11-02 17:26 ` Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 3/6] x86/time: read_boot_clock64() implementation Pavel Tatashin
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-02 17:26 UTC (permalink / raw)
  To: steven.sistare, daniel.m.jordan, linux, schwidefsky,
	heiko.carstens, john.stultz, sboyd, x86, linux-kernel, mingo,
	tglx, hpa, douly.fnst

read_boot_clock64() returns the boot start timestamp relative to the epoch.
Some arches may need to access the persistent clock interface in order to
calculate the epoch offset. However, the resolution of the persistent clock
might be low. Therefore, in order to avoid time discrepancies, a new
argument 'now' is added to read_boot_clock64()'s parameters. An arch may
decide to use it instead of accessing the persistent clock again.

Also, change read_boot_clock64() to have an __init prototype since it is
called only during boot.
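The discrepancy this avoids can be sketched in plain C. This is a toy model, not kernel code: the one-second-resolution persistent clock and all helper names here are hypothetical. Computing the boot offset from a second persistent-clock read can land on a different second than the timekeeping base, while reusing the caller's 'now' keeps both consistent.

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL

/* Toy persistent clock with one-second resolution (hypothetical). */
static int64_t persistent_read(int64_t real_ns)
{
	return (real_ns / NSEC_PER_SEC) * NSEC_PER_SEC;
}

/* Boot offset from a second, later persistent-clock read (old scheme). */
static int64_t offset_rereading(int64_t later_real_ns, int64_t uptime_ns)
{
	return persistent_read(later_real_ns) - uptime_ns;
}

/* Boot offset reusing the caller-provided first reading (new scheme). */
static int64_t offset_from_now(int64_t now_ns, int64_t uptime_ns)
{
	return now_ns - uptime_ns;
}
```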

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/arm/kernel/time.c      |  2 +-
 arch/s390/kernel/time.c     |  2 +-
 include/linux/timekeeping.h | 10 +++++-----
 kernel/time/timekeeping.c   |  8 ++++++--
 4 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/arm/kernel/time.c b/arch/arm/kernel/time.c
index 629f8e9981f1..5b259261a268 100644
--- a/arch/arm/kernel/time.c
+++ b/arch/arm/kernel/time.c
@@ -90,7 +90,7 @@ void read_persistent_clock64(struct timespec64 *ts)
 	__read_persistent_clock(ts);
 }
 
-void read_boot_clock64(struct timespec64 *ts)
+void __init read_boot_clock64(struct timespec64 *now, struct timespec64 *ts)
 {
 	__read_boot_clock(ts);
 }
diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
index 5cbd52169348..780b770e6a89 100644
--- a/arch/s390/kernel/time.c
+++ b/arch/s390/kernel/time.c
@@ -220,7 +220,7 @@ void read_persistent_clock64(struct timespec64 *ts)
 	ext_to_timespec64(clk, ts);
 }
 
-void read_boot_clock64(struct timespec64 *ts)
+void __init read_boot_clock64(struct timespec64 *now, struct timespec64 *ts)
 {
 	unsigned char clk[STORE_CLOCK_EXT_SIZE];
 	__u64 delta;
diff --git a/include/linux/timekeeping.h b/include/linux/timekeeping.h
index ddc229ff6d1e..ffe5705bd064 100644
--- a/include/linux/timekeeping.h
+++ b/include/linux/timekeeping.h
@@ -340,11 +340,11 @@ extern void ktime_get_snapshot(struct system_time_snapshot *systime_snapshot);
  */
 extern int persistent_clock_is_local;
 
-extern void read_persistent_clock(struct timespec *ts);
-extern void read_persistent_clock64(struct timespec64 *ts);
-extern void read_boot_clock64(struct timespec64 *ts);
-extern int update_persistent_clock(struct timespec now);
-extern int update_persistent_clock64(struct timespec64 now);
+void read_persistent_clock(struct timespec *ts);
+void read_persistent_clock64(struct timespec64 *ts);
+void read_boot_clock64(struct timespec64 *now, struct timespec64 *ts);
+int update_persistent_clock(struct timespec now);
+int update_persistent_clock64(struct timespec64 now);
 
 
 #endif
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 2cafb49aa65e..fc6220a89fcc 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1466,9 +1466,13 @@ void __weak read_persistent_clock64(struct timespec64 *ts64)
  * Function to read the exact time the system has been started.
  * Returns a timespec64 with tv_sec=0 and tv_nsec=0 if unsupported.
  *
+ * Argument 'now' contains the time from the persistent clock, used to compute
+ * the offset from epoch. It may be zero if the persistent clock is unavailable.
+ *
  *  XXX - Do be sure to remove it once all arches implement it.
  */
-void __weak read_boot_clock64(struct timespec64 *ts)
+void __weak __init read_boot_clock64(struct timespec64 *now,
+				     struct timespec64 *ts)
 {
 	ts->tv_sec = 0;
 	ts->tv_nsec = 0;
@@ -1499,7 +1503,7 @@ void __init timekeeping_init(void)
 	} else if (now.tv_sec || now.tv_nsec)
 		persistent_clock_exists = true;
 
-	read_boot_clock64(&boot);
+	read_boot_clock64(&now, &boot);
 	if (!timespec64_valid_strict(&boot)) {
 		pr_warn("WARNING: Boot clock returned invalid value!\n"
 			"         Check your CMOS/BIOS settings.\n");
-- 
2.15.0

* [PATCH v7 3/6] x86/time: read_boot_clock64() implementation
  2017-11-02 17:26 [PATCH v7 0/6] Early boot time stamps for x86 Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 2/6] time: sync read_boot_clock64() with persistent clock Pavel Tatashin
@ 2017-11-02 17:26 ` Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 4/6] sched: early boot clock Pavel Tatashin
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-02 17:26 UTC (permalink / raw)
  To: steven.sistare, daniel.m.jordan, linux, schwidefsky,
	heiko.carstens, john.stultz, sboyd, x86, linux-kernel, mingo,
	tglx, hpa, douly.fnst

read_boot_clock64() returns the time when the system started. Now that the
early boot clock is available on x86, it is possible to implement an
x86-specific version of read_boot_clock64() that takes advantage of this
new interface.
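As a sanity-check sketch in plain C (names are hypothetical; the real code below operates on timespec64), the validity logic reduces to:

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Toy model of the read_boot_clock64() logic: given nanoseconds of uptime
 * from the early sched clock (ns_boot) and the persistent-clock reading
 * converted to nanoseconds since epoch (ns_now), return the boot
 * timestamp, or 0 on any inconsistency so the caller falls back to
 * treating "now" as the beginning of boot.
 */
static uint64_t boot_timestamp_ns(uint64_t ns_boot, uint64_t ns_now)
{
	bool valid = ns_boot && (ns_now > ns_boot);

	return valid ? ns_now - ns_boot : 0;
}
```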

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/x86/kernel/time.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
index e0754cdbad37..3104c5304529 100644
--- a/arch/x86/kernel/time.c
+++ b/arch/x86/kernel/time.c
@@ -14,6 +14,7 @@
 #include <linux/i8253.h>
 #include <linux/time.h>
 #include <linux/export.h>
+#include <linux/sched/clock.h>
 
 #include <asm/vsyscall.h>
 #include <asm/x86_init.h>
@@ -95,3 +96,32 @@ void __init time_init(void)
 {
 	late_time_init = x86_late_time_init;
 }
+
+/*
+ * Called once during boot to initialize boot time.
+ * This function returns a timestamp in timespec format, which is sec/nsec
+ * from the epoch of when boot started.
+ * We use sched_clock_cpu(), which gives us nanoseconds from when this clock
+ * was started, which happens quite early in the boot process. To calculate
+ * the offset from epoch we use the information provided in 'now' by the caller.
+ *
+ * If sched_clock_cpu() is not available, or if there is any kind of error,
+ * i.e. the time from epoch is smaller than the boot time, we must return
+ * zeros in ts, and the caller will take care of the error by assuming that
+ * the time when this function was called is the beginning of boot time.
+ */
+void __init read_boot_clock64(struct timespec64 *now, struct timespec64 *ts)
+{
+	u64 ns_boot = sched_clock_cpu(smp_processor_id());
+	bool valid_clock;
+	u64 ns_now;
+
+	ns_now = timespec64_to_ns(now);
+	valid_clock = ns_boot && timespec64_valid_strict(now) &&
+			(ns_now > ns_boot);
+
+	if (!valid_clock)
+		*ts = (struct timespec64){0, 0};
+	else
+		*ts = ns_to_timespec64(ns_now - ns_boot);
+}
-- 
2.15.0

* [PATCH v7 4/6] sched: early boot clock
  2017-11-02 17:26 [PATCH v7 0/6] Early boot time stamps for x86 Pavel Tatashin
                   ` (2 preceding siblings ...)
  2017-11-02 17:26 ` [PATCH v7 3/6] x86/time: read_boot_clock64() implementation Pavel Tatashin
@ 2017-11-02 17:26 ` Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 5/6] x86/paravirt: add active_sched_clock to pv_time_ops Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 6/6] x86/tsc: use tsc early Pavel Tatashin
  5 siblings, 0 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-02 17:26 UTC (permalink / raw)
  To: steven.sistare, daniel.m.jordan, linux, schwidefsky,
	heiko.carstens, john.stultz, sboyd, x86, linux-kernel, mingo,
	tglx, hpa, douly.fnst

Allow sched_clock() to be used before sched_clock_init() and
sched_clock_init_late() are called. This provides a way to get
early boot timestamps on machines with unstable clocks.
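The hand-off in sched_clock_init_late() can be modeled with simple arithmetic. This is a sketch with hypothetical names, not the kernel's exact formula: __gtod_offset is chosen at the instant of the switch so that the reported clock value stays continuous.

```c
#include <stdint.h>

/* Compute the offset at switch time so the formula below equals ktime_ns. */
static uint64_t compute_gtod_offset(uint64_t raw_clock, uint64_t clock_offset,
				    uint64_t ktime_ns)
{
	return raw_clock + clock_offset - ktime_ns;
}

/* Clock value reported after the switch to the unstable-clock path. */
static uint64_t clock_after_switch(uint64_t raw_clock, uint64_t clock_offset,
				   uint64_t gtod_offset)
{
	return raw_clock + clock_offset - gtod_offset;
}
```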

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 kernel/sched/clock.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index ca0f8fc945c6..b86cc946ea19 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -217,6 +217,11 @@ void clear_sched_clock_stable(void)
  */
 static int __init sched_clock_init_late(void)
 {
+	/* Transition to unstable clock from early clock */
+	local_irq_disable();
+	__gtod_offset = sched_clock() + __sched_clock_offset - ktime_get_ns();
+	local_irq_enable();
+
 	sched_clock_running = 2;
 	/*
 	 * Ensure that it is impossible to not do a static_key update.
@@ -362,8 +367,9 @@ u64 sched_clock_cpu(int cpu)
 	if (sched_clock_stable())
 		return sched_clock() + __sched_clock_offset;
 
-	if (unlikely(!sched_clock_running))
-		return 0ull;
+	/* Use early clock until sched_clock_init_late() */
+	if (unlikely(sched_clock_running < 2))
+		return sched_clock() + __sched_clock_offset;
 
 	preempt_disable_notrace();
 	scd = cpu_sdc(cpu);
-- 
2.15.0

* [PATCH v7 5/6] x86/paravirt: add active_sched_clock to pv_time_ops
  2017-11-02 17:26 [PATCH v7 0/6] Early boot time stamps for x86 Pavel Tatashin
                   ` (3 preceding siblings ...)
  2017-11-02 17:26 ` [PATCH v7 4/6] sched: early boot clock Pavel Tatashin
@ 2017-11-02 17:26 ` Pavel Tatashin
  2017-11-02 17:26 ` [PATCH v7 6/6] x86/tsc: use tsc early Pavel Tatashin
  5 siblings, 0 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-02 17:26 UTC (permalink / raw)
  To: steven.sistare, daniel.m.jordan, linux, schwidefsky,
	heiko.carstens, john.stultz, sboyd, x86, linux-kernel, mingo,
	tglx, hpa, douly.fnst

The early boot clock might differ from the clock that is used later on;
therefore, add a new field to pv_time_ops that points to the currently
active clock. If a platform supports an early boot clock, this field will
be set to that clock early in boot, and later replaced with the permanent
clock.
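The indirection is a plain function-pointer swap. A minimal user-space sketch of the pattern (all names and clock values here are hypothetical, standing in for pv_time_ops):

```c
#include <stdint.h>

static uint64_t permanent_clock(void) { return 1000; }
static uint64_t early_clock(void)     { return 42; }

struct time_ops {
	uint64_t (*sched_clock)(void);        /* permanent implementation  */
	uint64_t (*active_sched_clock)(void); /* whichever is valid now    */
};

static struct time_ops ops = {
	.sched_clock	    = permanent_clock,
	.active_sched_clock = permanent_clock,
};

/* Callers always go through the active pointer. */
static uint64_t read_clock(void) { return ops.active_sched_clock(); }

/* Early boot: point the active clock at the early implementation. */
static void use_early_clock(void) { ops.active_sched_clock = early_clock; }

/* Once the permanent clock is up, fall back to ops.sched_clock. */
static void use_permanent_clock(void)
{
	ops.active_sched_clock = ops.sched_clock;
}
```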

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/x86/include/asm/paravirt.h       | 2 +-
 arch/x86/include/asm/paravirt_types.h | 1 +
 arch/x86/kernel/paravirt.c            | 1 +
 arch/x86/xen/time.c                   | 7 ++++---
 4 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 12deec722cf0..f624c9636003 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -171,7 +171,7 @@ static inline int rdmsrl_safe(unsigned msr, unsigned long long *p)
 
 static inline unsigned long long paravirt_sched_clock(void)
 {
-	return PVOP_CALL0(unsigned long long, pv_time_ops.sched_clock);
+	return PVOP_CALL0(unsigned long long, pv_time_ops.active_sched_clock);
 }
 
 struct static_key;
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 280d94c36dad..afbda404c1f7 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -97,6 +97,7 @@ struct pv_lazy_ops {
 struct pv_time_ops {
 	unsigned long long (*sched_clock)(void);
 	unsigned long long (*steal_clock)(int cpu);
+	unsigned long long (*active_sched_clock)(void);
 } __no_randomize_layout;
 
 struct pv_cpu_ops {
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 19a3e8f961c7..895c7c0e9c2e 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -310,6 +310,7 @@ struct pv_init_ops pv_init_ops = {
 struct pv_time_ops pv_time_ops = {
 	.sched_clock = native_sched_clock,
 	.steal_clock = native_steal_clock,
+	.active_sched_clock = native_sched_clock,
 };
 
 __visible struct pv_irq_ops pv_irq_ops = {
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 1ecb05db3632..6a77038e23f5 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -407,8 +407,8 @@ static void __init xen_time_init(void)
 
 void __ref xen_init_time_ops(void)
 {
-	pv_time_ops = xen_time_ops;
-
+	pv_time_ops.sched_clock = xen_time_ops.sched_clock;
+	pv_time_ops.steal_clock = xen_time_ops.steal_clock;
 	x86_init.timers.timer_init = xen_time_init;
 	x86_init.timers.setup_percpu_clockev = x86_init_noop;
 	x86_cpuinit.setup_percpu_clockev = x86_init_noop;
@@ -449,7 +449,8 @@ void __init xen_hvm_init_time_ops(void)
 		return;
 	}
 
-	pv_time_ops = xen_time_ops;
+	pv_time_ops.sched_clock = xen_time_ops.sched_clock;
+	pv_time_ops.steal_clock = xen_time_ops.steal_clock;
 	x86_init.timers.setup_percpu_clockev = xen_time_init;
 	x86_cpuinit.setup_percpu_clockev = xen_hvm_setup_cpu_clockevents;
 
-- 
2.15.0

* [PATCH v7 6/6] x86/tsc: use tsc early
  2017-11-02 17:26 [PATCH v7 0/6] Early boot time stamps for x86 Pavel Tatashin
                   ` (4 preceding siblings ...)
  2017-11-02 17:26 ` [PATCH v7 5/6] x86/paravirt: add active_sched_clock to pv_time_ops Pavel Tatashin
@ 2017-11-02 17:26 ` Pavel Tatashin
  2017-11-03  2:54   ` Dou Liyang
  2017-11-08  9:17   ` Dou Liyang
  5 siblings, 2 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-02 17:26 UTC (permalink / raw)
  To: steven.sistare, daniel.m.jordan, linux, schwidefsky,
	heiko.carstens, john.stultz, sboyd, x86, linux-kernel, mingo,
	tglx, hpa, douly.fnst

tsc_early_init():
Determines the offset, shift, and multiplier for the early clock based on
the TSC frequency.

tsc_early_fini():
Implements the finish part of the early tsc feature, and prints a message
about the offset, which can be useful to find out how much time was spent
in POST and the boot manager (if the TSC starts from 0 during boot).

sched_clock_early():
TSC-based implementation of the early clock.

Call tsc_early_init() to initialize early boot time stamp functionality on
the supported x86 platforms, and call tsc_early_fini() to finish this
feature after the permanent clock has been initialized.
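The cyc2ns conversion used by sched_clock_early() can be sketched as follows. In this sketch the mult/shift pair is hand-picked for a hypothetical 1-cycle-per-ns TSC rather than computed by clocks_calc_mult_shift(), and GCC's __uint128_t stands in for mul_u64_u32_shr(); the offset is chosen at init time so the early clock starts at 0.

```c
#include <stdint.h>

struct cyc2ns { uint32_t mult; uint32_t shift; int64_t offset; };

/* ns = (cycles * mult) >> shift, plus the init-time offset. */
static uint64_t cycles_to_ns(const struct cyc2ns *c, uint64_t cycles)
{
	uint64_t ns = (uint64_t)(((__uint128_t)cycles * c->mult) >> c->shift);

	return ns + c->offset;
}

static void cyc2ns_init(struct cyc2ns *c, uint64_t tsc_now)
{
	c->mult = 1 << 10;	/* 1 ns per cycle, expressed as mult/shift */
	c->shift = 10;
	c->offset = 0;
	/* Make the clock read 0 at the moment of initialization. */
	c->offset = -(int64_t)cycles_to_ns(c, tsc_now);
}
```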

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/x86/include/asm/tsc.h |  4 +++
 arch/x86/kernel/setup.c    | 10 ++++--
 arch/x86/kernel/time.c     |  1 +
 arch/x86/kernel/tsc.c      | 81 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 94 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index f5e6f1c417df..6dc9618b24e3 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -50,11 +50,15 @@ extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
 extern void tsc_verify_tsc_adjust(bool resume);
 extern void check_tsc_sync_source(int cpu);
 extern void check_tsc_sync_target(void);
+void tsc_early_init(unsigned int khz);
+void tsc_early_fini(void);
 #else
 static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
 static inline void tsc_verify_tsc_adjust(bool resume) { }
 static inline void check_tsc_sync_source(int cpu) { }
 static inline void check_tsc_sync_target(void) { }
+static inline void tsc_early_init(unsigned int khz) { }
+static inline void tsc_early_fini(void) { }
 #endif
 
 extern int notsc_setup(char *);
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 0957dd73d127..3df8be642b80 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -822,7 +822,11 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
 	return 0;
 }
 
-static void __init simple_udelay_calibration(void)
+/*
+ * Initialize early tsc to show early boot timestamps, and also loops_per_jiffy
+ * for udelay
+ */
+static void __init early_clock_calibration(void)
 {
 	unsigned int tsc_khz, cpu_khz;
 	unsigned long lpj;
@@ -837,6 +841,8 @@ static void __init simple_udelay_calibration(void)
 	if (!tsc_khz)
 		return;
 
+	tsc_early_init(tsc_khz);
+
 	lpj = tsc_khz * 1000;
 	do_div(lpj, HZ);
 	loops_per_jiffy = lpj;
@@ -1049,7 +1055,7 @@ void __init setup_arch(char **cmdline_p)
 	 */
 	init_hypervisor_platform();
 
-	simple_udelay_calibration();
+	early_clock_calibration();
 
 	x86_init.resources.probe_roms();
 
diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
index 3104c5304529..838c5980cae4 100644
--- a/arch/x86/kernel/time.c
+++ b/arch/x86/kernel/time.c
@@ -86,6 +86,7 @@ static __init void x86_late_time_init(void)
 {
 	x86_init.timers.timer_init();
 	tsc_init();
+	tsc_early_fini();
 }
 
 /*
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 1c4502a2b7b2..edacd0aa55f5 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -181,6 +181,80 @@ static void set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long tsc_
 	local_irq_restore(flags);
 }
 
+#ifdef CONFIG_X86_TSC
+static struct cyc2ns_data  cyc2ns_early;
+
+static u64 sched_clock_early(void)
+{
+	u64 ns = mul_u64_u32_shr(rdtsc(), cyc2ns_early.cyc2ns_mul,
+				 cyc2ns_early.cyc2ns_shift);
+	return ns + cyc2ns_early.cyc2ns_offset;
+}
+
+#ifdef CONFIG_PARAVIRT
+static inline void __init tsc_early_enable(void)
+{
+	pv_time_ops.active_sched_clock = sched_clock_early;
+}
+
+static inline void __init tsc_early_disable(void)
+{
+	pv_time_ops.active_sched_clock = pv_time_ops.sched_clock;
+}
+#else /* CONFIG_PARAVIRT */
+/*
+ * For the native clock we use two switches, static and dynamic. The static
+ * switch is initially true, so we check the dynamic switch, which is
+ * initially false. Later, when the early clock is disabled, we alter the
+ * static switch to avoid a branch check on every sched_clock() call.
+ */
+static bool __tsc_early;
+static DEFINE_STATIC_KEY_TRUE(__tsc_early_static);
+
+static inline void __init tsc_early_enable(void)
+{
+	__tsc_early = true;
+}
+
+static inline void __init tsc_early_disable(void)
+{
+	__tsc_early = false;
+	static_branch_disable(&__tsc_early_static);
+}
+#endif /* CONFIG_PARAVIRT */
+
+/*
+ * Initialize clock for early time stamps
+ */
+void __init tsc_early_init(unsigned int khz)
+{
+	clocks_calc_mult_shift(&cyc2ns_early.cyc2ns_mul,
+			       &cyc2ns_early.cyc2ns_shift,
+			       khz, NSEC_PER_MSEC, 0);
+	cyc2ns_early.cyc2ns_offset = -sched_clock_early();
+	tsc_early_enable();
+}
+
+void __init tsc_early_fini(void)
+{
+	unsigned long long t;
+	unsigned long r;
+
+	/* We did not have early sched clock if multiplier is 0 */
+	if (cyc2ns_early.cyc2ns_mul == 0) {
+		tsc_early_disable();
+		return;
+	}
+
+	t = -cyc2ns_early.cyc2ns_offset;
+	r = do_div(t, NSEC_PER_SEC);
+
+	tsc_early_disable();
+	__sched_clock_offset = sched_clock_early() - sched_clock();
+	pr_info("sched clock early is finished, offset [%lld.%09lds]\n", t, r);
+}
+#endif /* CONFIG_X86_TSC */
+
 /*
  * Scheduler clock - returns current time in nanosec units.
  */
@@ -193,6 +267,13 @@ u64 native_sched_clock(void)
 		return cycles_2_ns(tsc_now);
 	}
 
+#if !defined(CONFIG_PARAVIRT) && defined(CONFIG_X86_TSC)
+	if (static_branch_unlikely(&__tsc_early_static)) {
+		if (__tsc_early)
+			return sched_clock_early();
+	}
+#endif /* !CONFIG_PARAVIRT && CONFIG_X86_TSC */
+
 	/*
 	 * Fall back to jiffies if there's no TSC available:
 	 * ( But note that we still use it if the TSC is marked
-- 
2.15.0

* Re: [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag
  2017-11-02 17:26 ` [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag Pavel Tatashin
@ 2017-11-03  1:58   ` Dou Liyang
  2017-11-03 14:23     ` Pavel Tatashin
  0 siblings, 1 reply; 15+ messages in thread
From: Dou Liyang @ 2017-11-03  1:58 UTC (permalink / raw)
  To: Pavel Tatashin, steven.sistare, daniel.m.jordan, linux,
	schwidefsky, heiko.carstens, john.stultz, sboyd, x86,
	linux-kernel, mingo, tglx, hpa

Hi Pavel,

At 11/03/2017 01:26 AM, Pavel Tatashin wrote:
> tsc_disabled is set when notsc is passed as a kernel parameter. The reason
> we have notsc is to avoid timing problems on multi-processor systems.
> However, we already have a mechanism to detect and resolve these issues by
> invoking the TSC unstable path.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> ---
>  arch/x86/kernel/tsc.c | 17 +++--------------
>  1 file changed, 3 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
> index 796d96bb0821..1c4502a2b7b2 100644
> --- a/arch/x86/kernel/tsc.c
> +++ b/arch/x86/kernel/tsc.c
> @@ -37,11 +37,6 @@ EXPORT_SYMBOL(tsc_khz);
>   */
>  static int __read_mostly tsc_unstable;
>
> -/* native_sched_clock() is called before tsc_init(), so
> -   we must start with the TSC soft disabled to prevent
> -   erroneous rdtsc usage on !boot_cpu_has(X86_FEATURE_TSC) processors */
> -static int __read_mostly tsc_disabled = -1;
> -
>  static DEFINE_STATIC_KEY_FALSE(__use_tsc);
>
>  int tsc_clocksource_reliable;
> @@ -248,7 +243,7 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
>  int __init notsc_setup(char *str)
>  {
>  	pr_warn("Kernel compiled with CONFIG_X86_TSC, cannot disable TSC completely\n");

IMO, this warning may make users confused, could we remove it from here?

Thanks,
	dou.

* Re: [PATCH v7 6/6] x86/tsc: use tsc early
  2017-11-02 17:26 ` [PATCH v7 6/6] x86/tsc: use tsc early Pavel Tatashin
@ 2017-11-03  2:54   ` Dou Liyang
  2017-11-03 14:30     ` Pavel Tatashin
  2017-11-08  9:17   ` Dou Liyang
  1 sibling, 1 reply; 15+ messages in thread
From: Dou Liyang @ 2017-11-03  2:54 UTC (permalink / raw)
  To: Pavel Tatashin, steven.sistare, daniel.m.jordan, linux,
	schwidefsky, heiko.carstens, john.stultz, sboyd, x86,
	linux-kernel, mingo, tglx, hpa

Hi Pavel,

At 11/03/2017 01:26 AM, Pavel Tatashin wrote:
> tsc_early_init():
> Determines the offset, shift, and multiplier for the early clock based on
> the TSC frequency.
>
> tsc_early_fini():
> Implements the finish part of the early tsc feature, and prints a message
> about the offset, which can be useful to find out how much time was spent
> in POST and the boot manager (if the TSC starts from 0 during boot).
>
> sched_clock_early():
> TSC-based implementation of the early clock.
>
> Call tsc_early_init() to initialize early boot time stamp functionality on
> the supported x86 platforms, and call tsc_early_fini() to finish this
> feature after the permanent clock has been initialized.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> ---
>  arch/x86/include/asm/tsc.h |  4 +++
>  arch/x86/kernel/setup.c    | 10 ++++--
>  arch/x86/kernel/time.c     |  1 +
>  arch/x86/kernel/tsc.c      | 81 ++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 94 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
> index f5e6f1c417df..6dc9618b24e3 100644
> --- a/arch/x86/include/asm/tsc.h
> +++ b/arch/x86/include/asm/tsc.h
> @@ -50,11 +50,15 @@ extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
>  extern void tsc_verify_tsc_adjust(bool resume);
>  extern void check_tsc_sync_source(int cpu);
>  extern void check_tsc_sync_target(void);
> +void tsc_early_init(unsigned int khz);
> +void tsc_early_fini(void);
>  #else
>  static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
>  static inline void tsc_verify_tsc_adjust(bool resume) { }
>  static inline void check_tsc_sync_source(int cpu) { }
>  static inline void check_tsc_sync_target(void) { }
> +static inline void tsc_early_init(unsigned int khz) { }
> +static inline void tsc_early_fini(void) { }
>  #endif
>
>  extern int notsc_setup(char *);
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 0957dd73d127..3df8be642b80 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -822,7 +822,11 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
>  	return 0;
>  }
>
> -static void __init simple_udelay_calibration(void)
> +/*
> + * Initialize early tsc to show early boot timestamps, and also loops_per_jiffy
> + * for udelay
> + */
> +static void __init early_clock_calibration(void)

Commit:

eb496063c990 ("x86/timers: Move the simple udelay calibration to tsc.h")

moves this function to tsc.h and renames it in the tip tree[1].

I guess we can simplify this patch by using this commit.

As this series, except for the 2nd patch, targets x86, how about basing
this series on tip?

BTW, I am testing your patches, will give you result later.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git

Thanks,
	dou

* Re: [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag
  2017-11-03  1:58   ` Dou Liyang
@ 2017-11-03 14:23     ` Pavel Tatashin
  2017-11-08  8:52       ` Dou Liyang
  0 siblings, 1 reply; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-03 14:23 UTC (permalink / raw)
  To: Dou Liyang
  Cc: Steve Sistare, Daniel Jordan, linux, schwidefsky, heiko.carstens,
	John Stultz, sboyd, x86, linux-kernel, mingo, Thomas Gleixner,
	hpa

Hi Dou,

Sure, I can remove the warning, but I think we should print something
indicating that notsc is not a good parameter anymore, i.e. that
tsc=unstable is better. Perhaps something like:
"Kernel parameter 'notsc' is deprecated, please use
'tsc=unstable' instead"?

Pasha

On Thu, Nov 2, 2017 at 9:58 PM, Dou Liyang <douly.fnst@cn.fujitsu.com> wrote:
> Hi Pavel,
>
>
> At 11/03/2017 01:26 AM, Pavel Tatashin wrote:
>>
>> tsc_disabled is set when notsc is passed as a kernel parameter. The
>> reason we have notsc is to avoid timing problems on multi-processor
>> systems. However, we already have a mechanism to detect and resolve
>> these issues by invoking the TSC unstable path.
>>
>> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
>> ---
>>  arch/x86/kernel/tsc.c | 17 +++--------------
>>  1 file changed, 3 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
>> index 796d96bb0821..1c4502a2b7b2 100644
>> --- a/arch/x86/kernel/tsc.c
>> +++ b/arch/x86/kernel/tsc.c
>> @@ -37,11 +37,6 @@ EXPORT_SYMBOL(tsc_khz);
>>   */
>>  static int __read_mostly tsc_unstable;
>>
>> -/* native_sched_clock() is called before tsc_init(), so
>> -   we must start with the TSC soft disabled to prevent
>> -   erroneous rdtsc usage on !boot_cpu_has(X86_FEATURE_TSC) processors */
>> -static int __read_mostly tsc_disabled = -1;
>> -
>>  static DEFINE_STATIC_KEY_FALSE(__use_tsc);
>>
>>  int tsc_clocksource_reliable;
>> @@ -248,7 +243,7 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
>>  int __init notsc_setup(char *str)
>>  {
>>         pr_warn("Kernel compiled with CONFIG_X86_TSC, cannot disable TSC
>> completely\n");
>
>
> IMO, this warning may make users confused, could we remove it from here?
>
> Thanks,
>         dou.
>
>


* Re: [PATCH v7 6/6] x86/tsc: use tsc early
  2017-11-03  2:54   ` Dou Liyang
@ 2017-11-03 14:30     ` Pavel Tatashin
  0 siblings, 0 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-03 14:30 UTC (permalink / raw)
  To: Dou Liyang
  Cc: Steve Sistare, Daniel Jordan, linux, schwidefsky, heiko.carstens,
	John Stultz, sboyd, x86, linux-kernel, mingo, Thomas Gleixner,
	hpa

Hi Dou,

Thank you for testing it! I will rebase this series on the 'tip' tree
for the next iteration.

Thank you,
Pasha


* Re: [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag
  2017-11-03 14:23     ` Pavel Tatashin
@ 2017-11-08  8:52       ` Dou Liyang
  2017-11-08 17:23         ` Pavel Tatashin
  0 siblings, 1 reply; 15+ messages in thread
From: Dou Liyang @ 2017-11-08  8:52 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: Steve Sistare, Daniel Jordan, linux, schwidefsky, heiko.carstens,
	John Stultz, sboyd, x86, linux-kernel, mingo, Thomas Gleixner,
	hpa

Hi Pavel,

At 11/03/2017 10:23 PM, Pavel Tatashin wrote:
> Hi Dou,
>
> Sure, I can remove the warning, but I think we should print something
> indicating that notsc is no longer a recommended parameter, i.e.
> tsc=unstable is better. Perhaps something like:
> "Kernel parameter \'notsc\' is deprecated, please use
> \'tsc=unstable\' instead"?
>

IMO, we already have a message by

mark_tsc_unstable("boot parameter notsc");

and we will still use 'notsc' in the case of CONFIG_X86_TSC=n.

So, I guess there is no need to print this message.

Thanks,
	dou.

> Pasha


* Re: [PATCH v7 6/6] x86/tsc: use tsc early
  2017-11-02 17:26 ` [PATCH v7 6/6] x86/tsc: use tsc early Pavel Tatashin
  2017-11-03  2:54   ` Dou Liyang
@ 2017-11-08  9:17   ` Dou Liyang
  2017-11-08 17:24     ` Pavel Tatashin
  1 sibling, 1 reply; 15+ messages in thread
From: Dou Liyang @ 2017-11-08  9:17 UTC (permalink / raw)
  To: Pavel Tatashin, steven.sistare, daniel.m.jordan, linux,
	schwidefsky, heiko.carstens, john.stultz, sboyd, x86,
	linux-kernel, mingo, tglx, hpa

Hi Pavel,

Sorry for the late reply.

I have tested it based on the tip tree; it works for me.

Some concerns below.

At 11/03/2017 01:26 AM, Pavel Tatashin wrote:
> tsc_early_init():
> Determines offset, shift and multiplier for the early clock based on the
> TSC frequency.
>
> tsc_early_fini():
> Implements the finish part of the early tsc feature and prints a message
> about the offset, which can be useful for finding out how much time was
> spent in POST and the boot manager (if the TSC starts from 0 during boot).
>
> sched_clock_early():
> TSC-based implementation of the early clock.
>
> Call tsc_early_init() to initialize early boot time stamp functionality on
> the supported x86 platforms, and call tsc_early_fini() to finish this
> feature after the permanent clock has been initialized.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> ---
>  arch/x86/include/asm/tsc.h |  4 +++
>  arch/x86/kernel/setup.c    | 10 ++++--
>  arch/x86/kernel/time.c     |  1 +
>  arch/x86/kernel/tsc.c      | 81 ++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 94 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
> index f5e6f1c417df..6dc9618b24e3 100644
> --- a/arch/x86/include/asm/tsc.h
> +++ b/arch/x86/include/asm/tsc.h
> @@ -50,11 +50,15 @@ extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
>  extern void tsc_verify_tsc_adjust(bool resume);
>  extern void check_tsc_sync_source(int cpu);
>  extern void check_tsc_sync_target(void);
> +void tsc_early_init(unsigned int khz);
> +void tsc_early_fini(void);
>  #else
>  static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
>  static inline void tsc_verify_tsc_adjust(bool resume) { }
>  static inline void check_tsc_sync_source(int cpu) { }
>  static inline void check_tsc_sync_target(void) { }
> +static inline void tsc_early_init(unsigned int khz) { }
> +static inline void tsc_early_fini(void) { }
>  #endif
>
>  extern int notsc_setup(char *);
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 0957dd73d127..3df8be642b80 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -822,7 +822,11 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
>  	return 0;
>  }
>
> -static void __init simple_udelay_calibration(void)
> +/*
> + * Initialize early tsc to show early boot timestamps, and also loops_per_jiffy
> + * for udelay
> + */
> +static void __init early_clock_calibration(void)
>  {
>  	unsigned int tsc_khz, cpu_khz;
>  	unsigned long lpj;
> @@ -837,6 +841,8 @@ static void __init simple_udelay_calibration(void)
>  	if (!tsc_khz)
>  		return;
>
> +	tsc_early_init(tsc_khz);
> +
>  	lpj = tsc_khz * 1000;
>  	do_div(lpj, HZ);
>  	loops_per_jiffy = lpj;
> @@ -1049,7 +1055,7 @@ void __init setup_arch(char **cmdline_p)
>  	 */
>  	init_hypervisor_platform();
>
> -	simple_udelay_calibration();
> +	early_clock_calibration();
>
>  	x86_init.resources.probe_roms();
>
> diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
> index 3104c5304529..838c5980cae4 100644
> --- a/arch/x86/kernel/time.c
> +++ b/arch/x86/kernel/time.c
> @@ -86,6 +86,7 @@ static __init void x86_late_time_init(void)
>  {
>  	x86_init.timers.timer_init();
>  	tsc_init();
> +	tsc_early_fini();

Can we put this into tsc_init(), so we can remove the definitions in
tsc.h?

>  }
>
>  /*
> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
> index 1c4502a2b7b2..edacd0aa55f5 100644
> --- a/arch/x86/kernel/tsc.c
> +++ b/arch/x86/kernel/tsc.c
> @@ -181,6 +181,80 @@ static void set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long tsc_
>  	local_irq_restore(flags);
>  }
>
> +#ifdef CONFIG_X86_TSC
> +static struct cyc2ns_data  cyc2ns_early;
> +
> +static u64 sched_clock_early(void)

This function is only called during boot time. Should it
be a "__init" function too?


Thanks
	dou.


* Re: [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag
  2017-11-08  8:52       ` Dou Liyang
@ 2017-11-08 17:23         ` Pavel Tatashin
  0 siblings, 0 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-08 17:23 UTC (permalink / raw)
  To: Dou Liyang
  Cc: Steve Sistare, Daniel Jordan, linux, schwidefsky, heiko.carstens,
	John Stultz, sboyd, x86, linux-kernel, mingo, Thomas Gleixner,
	hpa

>
> IMO, we already have a message by
>
> mark_tsc_unstable("boot parameter notsc");
>
> and we will still use 'notsc' in the case of CONFIG_X86_TSC=n.
>
> So, I guess there is no need to print this message.
>

OK, removed the warning.


* Re: [PATCH v7 6/6] x86/tsc: use tsc early
  2017-11-08  9:17   ` Dou Liyang
@ 2017-11-08 17:24     ` Pavel Tatashin
  0 siblings, 0 replies; 15+ messages in thread
From: Pavel Tatashin @ 2017-11-08 17:24 UTC (permalink / raw)
  To: Dou Liyang
  Cc: Steve Sistare, Daniel Jordan, linux, schwidefsky, heiko.carstens,
	John Stultz, sboyd, x86, linux-kernel, mingo, Thomas Gleixner,
	hpa

Hi Dou,

> I have tested it based on the tip tree; it works for me.

Excellent, thank you very much for taking the time to test this project.


>>         x86_init.timers.timer_init();
>>         tsc_init();
>> +       tsc_early_fini();
>
>
> Can we put this into tsc_init(), so we can remove the definitions in
> tsc.h?

Sure, done.

>> +static u64 sched_clock_early(void)
>
>
> This function is only called during boot time. Should it
> be a "__init" function too?

While it is guaranteed that this function will never be called once the
system is booted, so we could indeed discard it, I do not think this is
possible: the function is called from sched_clock(), which is not part
of the __init section. Is there a way to do it without getting a
section mismatch warning?

I will send out new patches with Dou's comments addressed soon.

Thank you,
Pavel


end of thread, other threads:[~2017-11-08 17:24 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-02 17:26 [PATCH v7 0/6] Early boot time stamps for x86 Pavel Tatashin
2017-11-02 17:26 ` [PATCH v7 1/6] x86/tsc: remove tsc_disabled flag Pavel Tatashin
2017-11-03  1:58   ` Dou Liyang
2017-11-03 14:23     ` Pavel Tatashin
2017-11-08  8:52       ` Dou Liyang
2017-11-08 17:23         ` Pavel Tatashin
2017-11-02 17:26 ` [PATCH v7 2/6] time: sync read_boot_clock64() with persistent clock Pavel Tatashin
2017-11-02 17:26 ` [PATCH v7 3/6] x86/time: read_boot_clock64() implementation Pavel Tatashin
2017-11-02 17:26 ` [PATCH v7 4/6] sched: early boot clock Pavel Tatashin
2017-11-02 17:26 ` [PATCH v7 5/6] x86/paravirt: add active_sched_clock to pv_time_ops Pavel Tatashin
2017-11-02 17:26 ` [PATCH v7 6/6] x86/tsc: use tsc early Pavel Tatashin
2017-11-03  2:54   ` Dou Liyang
2017-11-03 14:30     ` Pavel Tatashin
2017-11-08  9:17   ` Dou Liyang
2017-11-08 17:24     ` Pavel Tatashin
