* [PATCHv3 0/3] 64bit friendly generic sched_clock
@ 2013-06-05 23:54 Stephen Boyd
  2013-06-05 23:54 ` [PATCHv3 1/3] sched_clock: Add support for >32 bit sched_clock Stephen Boyd
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Stephen Boyd @ 2013-06-05 23:54 UTC (permalink / raw)
  To: John Stultz
  Cc: linux-kernel, linux-arm-msm, linux-arm-kernel, Russell King, arm,
	Catalin Marinas, Will Deacon, Thomas Gleixner,
	Christopher Covington

Ok, here's the second take at 64-bit support in generic sched_clock.
This assumes that the previous three patches in v2 of this
series have been applied.

I've also noticed that we probably need to update the mult/shift
calculation similar to how clocksources are done. Should we
just copy/paste the maxsec calculation code here or do something
else?
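[For reference, the clocksource-style calculation being alluded to can be
sketched in userspace roughly as follows. This is an illustrative model of
clocks_calc_mult_shift() -- the name calc_mult_shift and the test rates are
made up here, not taken from the patches:]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace sketch of the clocksource-style mult/shift search
 * (modelled on clocks_calc_mult_shift()).  It picks the largest
 * shift whose rounded mult still keeps 'maxsec' seconds worth of
 * 'from'-rate cycles inside 64 bits after the multiply.
 */
static void calc_mult_shift(uint32_t *mult, uint32_t *shift,
			    uint32_t from, uint32_t to, uint32_t maxsec)
{
	uint64_t tmp;
	uint32_t sft, sftacc = 32;

	/* Shift budget consumed by the maxsec conversion range */
	tmp = ((uint64_t)maxsec * from) >> 32;
	while (tmp) {
		tmp >>= 1;
		sftacc--;
	}

	/* Largest shift whose rounded mult fits inside the budget */
	for (sft = 32; sft > 0; sft--) {
		tmp = (uint64_t)to << sft;
		tmp += from / 2;
		tmp /= from;
		if ((tmp >> sftacc) == 0)
			break;
	}
	*mult = (uint32_t)tmp;
	*shift = sft;
}
```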

Stephen Boyd (3):
  sched_clock: Add support for >32 bit sched_clock
  ARM: arch_timer: Move to generic sched_clock framework
  arm64: Move to generic sched_clock infrastructure

 arch/arm/kernel/arch_timer.c | 14 ++------------
 arch/arm64/Kconfig           |  1 +
 arch/arm64/kernel/time.c     | 11 ++---------
 include/linux/sched_clock.h  |  3 +--
 kernel/time/sched_clock.c    | 46 ++++++++++++++++++++++++++------------------
 5 files changed, 33 insertions(+), 42 deletions(-)

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation



* [PATCHv3 1/3] sched_clock: Add support for >32 bit sched_clock
  2013-06-05 23:54 [PATCHv3 0/3] 64bit friendly generic sched_clock Stephen Boyd
@ 2013-06-05 23:54 ` Stephen Boyd
  2013-06-06  0:38   ` John Stultz
  2013-06-05 23:54 ` [PATCHv3 2/3] ARM: arch_timer: Move to generic sched_clock framework Stephen Boyd
  2013-06-05 23:54 ` [PATCHv3 3/3] arm64: Move to generic sched_clock infrastructure Stephen Boyd
  2 siblings, 1 reply; 7+ messages in thread
From: Stephen Boyd @ 2013-06-05 23:54 UTC (permalink / raw)
  To: John Stultz
  Cc: linux-kernel, linux-arm-msm, linux-arm-kernel, Russell King, arm,
	Catalin Marinas, Will Deacon, Thomas Gleixner,
	Christopher Covington

The ARM architected system counter has at least 56 usable bits.
Add support for counters with more than 32 bits to the generic
sched_clock implementation so we can avoid the complexity of
dealing with wrap-around on these devices while benefiting from
the irqtime accounting and suspend/resume handling that the
generic sched_clock code already has.

All users should switch over to the 64bit read function so we can
deprecate setup_sched_clock() in favor of sched_clock_setup().

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
---

I've noticed that we probably need to update the mult/shift
calculation similar to how clocksources are done. Should we
just copy/paste the maxsec calculation code here or do something
smarter?

 include/linux/sched_clock.h |  1 +
 kernel/time/sched_clock.c   | 41 +++++++++++++++++++++++++++--------------
 2 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/include/linux/sched_clock.h b/include/linux/sched_clock.h
index fa7922c..81baaef 100644
--- a/include/linux/sched_clock.h
+++ b/include/linux/sched_clock.h
@@ -15,6 +15,7 @@ static inline void sched_clock_postinit(void) { }
 #endif
 
 extern void setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate);
+extern void sched_clock_setup(u64 (*read)(void), int bits, unsigned long rate);
 
 extern unsigned long long (*sched_clock_func)(void);
 
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index aad1ae6..3478b6d 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -14,11 +14,12 @@
 #include <linux/syscore_ops.h>
 #include <linux/timer.h>
 #include <linux/sched_clock.h>
+#include <linux/bitops.h>
 
 struct clock_data {
 	u64 epoch_ns;
-	u32 epoch_cyc;
-	u32 epoch_cyc_copy;
+	u64 epoch_cyc;
+	u64 epoch_cyc_copy;
 	unsigned long rate;
 	u32 mult;
 	u32 shift;
@@ -35,24 +36,31 @@ static struct clock_data cd = {
 	.mult	= NSEC_PER_SEC / HZ,
 };
 
-static u32 __read_mostly sched_clock_mask = 0xffffffff;
+static u64 __read_mostly sched_clock_mask;
 
-static u32 notrace jiffy_sched_clock_read(void)
+static u64 notrace jiffy_sched_clock_read(void)
 {
-	return (u32)(jiffies - INITIAL_JIFFIES);
+	return (u64)(jiffies - INITIAL_JIFFIES);
 }
 
-static u32 __read_mostly (*read_sched_clock)(void) = jiffy_sched_clock_read;
+static u32 __read_mostly (*read_sched_clock_32)(void);
+
+static u64 notrace read_sched_clock_32_wrapper(void)
+{
+	return read_sched_clock_32();
+}
+
+static u64 __read_mostly (*read_sched_clock)(void) = jiffy_sched_clock_read;
 
 static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 {
 	return (cyc * mult) >> shift;
 }
 
-static unsigned long long notrace cyc_to_sched_clock(u32 cyc, u32 mask)
+static unsigned long long notrace cyc_to_sched_clock(u64 cyc, u64 mask)
 {
 	u64 epoch_ns;
-	u32 epoch_cyc;
+	u64 epoch_cyc;
 
 	/*
 	 * Load the epoch_cyc and epoch_ns atomically.  We do this by
@@ -77,7 +85,7 @@ static unsigned long long notrace cyc_to_sched_clock(u32 cyc, u32 mask)
 static void notrace update_sched_clock(void)
 {
 	unsigned long flags;
-	u32 cyc;
+	u64 cyc;
 	u64 ns;
 
 	cyc = read_sched_clock();
@@ -103,7 +111,7 @@ static void sched_clock_poll(unsigned long wrap_ticks)
 	update_sched_clock();
 }
 
-void __init setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate)
+void __init sched_clock_setup(u64 (*read)(void), int bits, unsigned long rate)
 {
 	unsigned long r, w;
 	u64 res, wrap;
@@ -112,10 +120,9 @@ void __init setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate)
 	if (cd.rate > rate)
 		return;
 
-	BUG_ON(bits > 32);
 	WARN_ON(!irqs_disabled());
 	read_sched_clock = read;
-	sched_clock_mask = (1 << bits) - 1;
+	sched_clock_mask = (1ULL << bits) - 1;
 	cd.rate = rate;
 
 	/* calculate the mult/shift to convert counter ticks to ns. */
@@ -160,9 +167,15 @@ void __init setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate)
 	pr_debug("Registered %pF as sched_clock source\n", read);
 }
 
+void __init setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate)
+{
+	read_sched_clock_32 = read;
+	sched_clock_setup(read_sched_clock_32_wrapper, bits, rate);
+}
+
 static unsigned long long notrace sched_clock_32(void)
 {
-	u32 cyc = read_sched_clock();
+	u64 cyc = read_sched_clock();
 	return cyc_to_sched_clock(cyc, sched_clock_mask);
 }
 
@@ -183,7 +196,7 @@ void __init sched_clock_postinit(void)
 	 * make it the final one one.
 	 */
 	if (read_sched_clock == jiffy_sched_clock_read)
-		setup_sched_clock(jiffy_sched_clock_read, 32, HZ);
+		sched_clock_setup(jiffy_sched_clock_read, BITS_PER_LONG, HZ);
 
 	sched_clock_poll(sched_clock_timer.data);
 }



* [PATCHv3 2/3] ARM: arch_timer: Move to generic sched_clock framework
  2013-06-05 23:54 [PATCHv3 0/3] 64bit friendly generic sched_clock Stephen Boyd
  2013-06-05 23:54 ` [PATCHv3 1/3] sched_clock: Add support for >32 bit sched_clock Stephen Boyd
@ 2013-06-05 23:54 ` Stephen Boyd
  2013-06-05 23:54 ` [PATCHv3 3/3] arm64: Move to generic sched_clock infrastructure Stephen Boyd
  2 siblings, 0 replies; 7+ messages in thread
From: Stephen Boyd @ 2013-06-05 23:54 UTC (permalink / raw)
  To: John Stultz
  Cc: linux-kernel, linux-arm-msm, linux-arm-kernel, Russell King, arm,
	Catalin Marinas, Will Deacon, Thomas Gleixner,
	Christopher Covington

Register with the generic sched_clock framework now that it
supports 64 bits. This fixes two problems with the current
sched_clock support for machines using the architected timers.
First off, we don't subtract the start value from subsequent
sched_clock calls so we can potentially start off with
sched_clock returning gigantic numbers. Second, there is no
support for suspend/resume handling so problems such as discussed
in 6a4dae5 (ARM: 7565/1: sched: stop sched_clock() during
suspend, 2012-10-23) can happen without this patch.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
---
 arch/arm/kernel/arch_timer.c | 14 ++------------
 include/linux/sched_clock.h  |  2 --
 kernel/time/sched_clock.c    | 13 ++++---------
 3 files changed, 6 insertions(+), 23 deletions(-)

diff --git a/arch/arm/kernel/arch_timer.c b/arch/arm/kernel/arch_timer.c
index 221f07b..2966288 100644
--- a/arch/arm/kernel/arch_timer.c
+++ b/arch/arm/kernel/arch_timer.c
@@ -22,13 +22,6 @@ static unsigned long arch_timer_read_counter_long(void)
 	return arch_timer_read_counter();
 }
 
-static u32 sched_clock_mult __read_mostly;
-
-static unsigned long long notrace arch_timer_sched_clock(void)
-{
-	return arch_timer_read_counter() * sched_clock_mult;
-}
-
 static struct delay_timer arch_delay_timer;
 
 static void __init arch_timer_delay_timer_register(void)
@@ -48,11 +41,8 @@ int __init arch_timer_arch_init(void)
 
 	arch_timer_delay_timer_register();
 
-	/* Cache the sched_clock multiplier to save a divide in the hot path. */
-	sched_clock_mult = NSEC_PER_SEC / arch_timer_rate;
-	sched_clock_func = arch_timer_sched_clock;
-	pr_info("sched_clock: ARM arch timer >56 bits at %ukHz, resolution %uns\n",
-		arch_timer_rate / 1000, sched_clock_mult);
+	/* 56 bits minimum, so we assume worst case rollover */
+	sched_clock_setup(arch_timer_read_counter, 56, arch_timer_rate);
 
 	return 0;
 }
diff --git a/include/linux/sched_clock.h b/include/linux/sched_clock.h
index 81baaef..04cee83 100644
--- a/include/linux/sched_clock.h
+++ b/include/linux/sched_clock.h
@@ -17,6 +17,4 @@ static inline void sched_clock_postinit(void) { }
 extern void setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate);
 extern void sched_clock_setup(u64 (*read)(void), int bits, unsigned long rate);
 
-extern unsigned long long (*sched_clock_func)(void);
-
 #endif
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 3478b6d..f69addf 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -173,20 +173,15 @@ void __init setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate)
 	sched_clock_setup(read_sched_clock_32_wrapper, bits, rate);
 }
 
-static unsigned long long notrace sched_clock_32(void)
-{
-	u64 cyc = read_sched_clock();
-	return cyc_to_sched_clock(cyc, sched_clock_mask);
-}
-
-unsigned long long __read_mostly (*sched_clock_func)(void) = sched_clock_32;
-
 unsigned long long notrace sched_clock(void)
 {
+	u64 cyc;
+
 	if (cd.suspended)
 		return cd.epoch_ns;
 
-	return sched_clock_func();
+	cyc = read_sched_clock();
+	return cyc_to_sched_clock(cyc, sched_clock_mask);
 }
 
 void __init sched_clock_postinit(void)



* [PATCHv3 3/3] arm64: Move to generic sched_clock infrastructure
  2013-06-05 23:54 [PATCHv3 0/3] 64bit friendly generic sched_clock Stephen Boyd
  2013-06-05 23:54 ` [PATCHv3 1/3] sched_clock: Add support for >32 bit sched_clock Stephen Boyd
  2013-06-05 23:54 ` [PATCHv3 2/3] ARM: arch_timer: Move to generic sched_clock framework Stephen Boyd
@ 2013-06-05 23:54 ` Stephen Boyd
  2013-06-12 18:51   ` Christopher Covington
  2 siblings, 1 reply; 7+ messages in thread
From: Stephen Boyd @ 2013-06-05 23:54 UTC (permalink / raw)
  To: John Stultz
  Cc: linux-kernel, linux-arm-msm, linux-arm-kernel, Russell King, arm,
	Catalin Marinas, Will Deacon, Thomas Gleixner,
	Christopher Covington

Use the generic sched_clock infrastructure instead of rolling our
own. This has the added benefit of fixing suspend/resume as
outlined in 6a4dae5 (ARM: 7565/1: sched: stop sched_clock()
during suspend, 2012-10-23) and correcting the timestamps when
the hardware returns a value instead of 0 upon the first read.

Cc: Christopher Covington <cov@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
---
 arch/arm64/Kconfig       |  1 +
 arch/arm64/kernel/time.c | 11 ++---------
 2 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 56b3f6d..f9c6e92 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -13,6 +13,7 @@ config ARM64
 	select GENERIC_IOMAP
 	select GENERIC_IRQ_PROBE
 	select GENERIC_IRQ_SHOW
+	select GENERIC_SCHED_CLOCK
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_TIME_VSYSCALL
 	select HARDIRQS_SW_RESEND
diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c
index a551f88..a98eb8b 100644
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -33,6 +33,7 @@
 #include <linux/irq.h>
 #include <linux/delay.h>
 #include <linux/clocksource.h>
+#include <linux/sched_clock.h>
 
 #include <clocksource/arm_arch_timer.h>
 
@@ -61,13 +62,6 @@ unsigned long profile_pc(struct pt_regs *regs)
 EXPORT_SYMBOL(profile_pc);
 #endif
 
-static u64 sched_clock_mult __read_mostly;
-
-unsigned long long notrace sched_clock(void)
-{
-	return arch_timer_read_counter() * sched_clock_mult;
-}
-
 int read_current_timer(unsigned long *timer_value)
 {
 	*timer_value = arch_timer_read_counter();
@@ -84,8 +78,7 @@ void __init time_init(void)
 	if (!arch_timer_rate)
 		panic("Unable to initialise architected timer.\n");
 
-	/* Cache the sched_clock multiplier to save a divide in the hot path. */
-	sched_clock_mult = NSEC_PER_SEC / arch_timer_rate;
+	sched_clock_setup(arch_timer_read_counter, 56, arch_timer_rate);
 
 	/* Calibrate the delay loop directly */
 	lpj_fine = arch_timer_rate / HZ;



* Re: [PATCHv3 1/3] sched_clock: Add support for >32 bit sched_clock
  2013-06-05 23:54 ` [PATCHv3 1/3] sched_clock: Add support for >32 bit sched_clock Stephen Boyd
@ 2013-06-06  0:38   ` John Stultz
  2013-06-06  1:43     ` Stephen Boyd
  0 siblings, 1 reply; 7+ messages in thread
From: John Stultz @ 2013-06-06  0:38 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: linux-kernel, linux-arm-msm, linux-arm-kernel, Russell King, arm,
	Catalin Marinas, Will Deacon, Thomas Gleixner,
	Christopher Covington

On 06/05/2013 04:54 PM, Stephen Boyd wrote:
> The ARM architected system counter has at least 56 usable bits.
> Add support for counters with more than 32 bits to the generic
> sched_clock implementation so we can avoid the complexity of
> dealing with wrap-around on these devices while benefiting from
> the irqtime accounting and suspend/resume handling that the
> generic sched_clock code already has.
>
> All users should switch over to the 64bit read function so we can
> deprecate setup_sched_clock() in favor of sched_clock_setup().

Minor nits below.

>
> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
> ---
>
> I've noticed that we probably need to update the mult/shift
> calculation similar to how clocksources are done. Should we
> just copy/paste the maxsec calculation code here or do something
> smarter?

So, the clocksource calculation has an extra variable it has to balance, 
which is the granularity of ntp adjustments being made (since with 
higher shift values, we can make relatively smaller changes by +1 or -1 
from mult).

sched_clock doesn't have to deal with frequency adjustments, so the 
shift value just needs to be high enough to be able to accurately 
express the desired counter frequency.  Too high and you risk 
multiplication overflows if there are large gaps between updates, too 
low though and you run into possible accuracy issues (though I hope 
there isn't much that's using sched_clock for long-term timing where 
slight accuracy issues would be problematic).

So I think it's ok if the sched_clock code uses its own logic for 
calculating the mult/shift pair, since the constraints are different 
from what we expect from timekeeping.
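[The tradeoff described above can be put in numbers with a hypothetical
helper -- the 19.2 MHz rate and the shift values below are arbitrary
examples for illustration, not values from the patches:]

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/*
 * Illustration only: with ns = (cyc * mult) >> shift, the multiply
 * overflows 64 bits once the cycle delta exceeds U64_MAX / mult.
 * A larger shift gives a more accurate mult but shrinks that safe
 * window between epoch updates.
 */
static uint64_t seconds_before_mult_overflow(uint32_t rate, uint32_t shift)
{
	uint64_t mult = ((NSEC_PER_SEC << shift) + rate / 2) / rate;
	uint64_t max_cycles = UINT64_MAX / mult;

	return max_cycles / rate;
}
```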


>
>   include/linux/sched_clock.h |  1 +
>   kernel/time/sched_clock.c   | 41 +++++++++++++++++++++++++++--------------
>   2 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/sched_clock.h b/include/linux/sched_clock.h
> index fa7922c..81baaef 100644
> --- a/include/linux/sched_clock.h
> +++ b/include/linux/sched_clock.h
> @@ -15,6 +15,7 @@ static inline void sched_clock_postinit(void) { }
>   #endif
>   
>   extern void setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate);
> +extern void sched_clock_setup(u64 (*read)(void), int bits, unsigned long rate);

Eww. This sort of word-swizzled function names makes patch reviewing a pain.

I know you're trying to deprecate the old function and provide a smooth 
transition, but would you also consider including follow-on 
patch/patches with this set that converts the existing setup_sched_clock 
usage (at least just the ones in drivers/clocksource?) so it doesn't 
stick around forever?

And if not, at least add a clear comment here, and maybe some build 
warnings to the old function so the driver owners know to make the 
conversion happen quickly.



>   extern unsigned long long (*sched_clock_func)(void);
>   
> diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
> index aad1ae6..3478b6d 100644
> --- a/kernel/time/sched_clock.c
> +++ b/kernel/time/sched_clock.c
> @@ -14,11 +14,12 @@
>   #include <linux/syscore_ops.h>
>   #include <linux/timer.h>
>   #include <linux/sched_clock.h>
> +#include <linux/bitops.h>
>   
>   struct clock_data {
>   	u64 epoch_ns;
> -	u32 epoch_cyc;
> -	u32 epoch_cyc_copy;
> +	u64 epoch_cyc;
> +	u64 epoch_cyc_copy;
>   	unsigned long rate;
>   	u32 mult;
>   	u32 shift;
> @@ -35,24 +36,31 @@ static struct clock_data cd = {
>   	.mult	= NSEC_PER_SEC / HZ,
>   };
>   
> -static u32 __read_mostly sched_clock_mask = 0xffffffff;
> +static u64 __read_mostly sched_clock_mask;
>   
> -static u32 notrace jiffy_sched_clock_read(void)
> +static u64 notrace jiffy_sched_clock_read(void)
>   {
> -	return (u32)(jiffies - INITIAL_JIFFIES);
> +	return (u64)(jiffies - INITIAL_JIFFIES);
>   }

Also, you might add a comment noting you register jiffies w/ 
BITS_PER_LONG, to clarify that we don't have to use jiffies_64 here on 
32-bit systems (despite the u64 cast)?


thanks
-john



* Re: [PATCHv3 1/3] sched_clock: Add support for >32 bit sched_clock
  2013-06-06  0:38   ` John Stultz
@ 2013-06-06  1:43     ` Stephen Boyd
  0 siblings, 0 replies; 7+ messages in thread
From: Stephen Boyd @ 2013-06-06  1:43 UTC (permalink / raw)
  To: John Stultz
  Cc: linux-kernel, linux-arm-msm, linux-arm-kernel, Russell King, arm,
	Catalin Marinas, Will Deacon, Thomas Gleixner,
	Christopher Covington

On 06/05, John Stultz wrote:
> On 06/05/2013 04:54 PM, Stephen Boyd wrote:
> >
> >I've noticed that we probably need to update the mult/shift
> >calculation similar to how clocksources are done. Should we
> >just copy/paste the maxsec calculation code here or do something
> >smarter?
> 
> So, the clocksource calculation has an extra variable it has to
> balance, which is the granularity of ntp adjustments being made
> (since with higher shift values, we can make relatively smaller
> changes by +1 or -1 from mult).
> 
> sched_clock doesn't have to deal with frequency adjustments, so the
> shift value just needs to be high enough to be able to accurately
> express the desired counter frequency.  Too high and you risk
> multiplication overflows if there are large gaps between updates,
> too low though and you run into possible accuracy issues (though I
> hope there isn't much that's using sched_clock for long-term timing
> where slight accuracy issues would be problematic).
> 
> So I think it's ok if the sched_clock code uses its own logic for
> calculating the mult/shift pair, since the constraints are different
> from what we expect from timekeeping.
> 

I was thinking perhaps we can do the (1 << bits) / rate thing but
not limit it to 600 seconds. Instead let it be as big as it
actually is? Right now it's actually better to register as a 32
bit clock because the wraparound comes out to be larger when
maxsec is 0.
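[For scale, the wraparound being discussed can be sketched with a
hypothetical helper -- the 19.2 MHz rate is an arbitrary example:]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper: seconds until a 'bits'-wide counter running
 * at 'rate' Hz wraps.  A 32-bit counter at tens of MHz wraps in
 * minutes, while a 56-bit one runs for more than a century, which
 * is why the wider registration is worth having.
 */
static uint64_t wrap_seconds(unsigned int bits, uint32_t rate)
{
	uint64_t mask = (bits >= 64) ? UINT64_MAX
				     : (((uint64_t)1 << bits) - 1);

	return mask / rate;
}
```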

> 
> >
> >  include/linux/sched_clock.h |  1 +
> >  kernel/time/sched_clock.c   | 41 +++++++++++++++++++++++++++--------------
> >  2 files changed, 28 insertions(+), 14 deletions(-)
> >
> >diff --git a/include/linux/sched_clock.h b/include/linux/sched_clock.h
> >index fa7922c..81baaef 100644
> >--- a/include/linux/sched_clock.h
> >+++ b/include/linux/sched_clock.h
> >@@ -15,6 +15,7 @@ static inline void sched_clock_postinit(void) { }
> >  #endif
> >  extern void setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate);
> >+extern void sched_clock_setup(u64 (*read)(void), int bits, unsigned long rate);
> 
> Eww. This sort of word-swizzled function names makes patch reviewing a pain.

How about sched_clock_register() or register_sched_clock()?

> 
> I know you're trying to deprecate the old function and provide a
> smooth transition, but would you also consider including follow-on
> patch/patches with this set that converts the existing
> setup_sched_clock usage (at least just the ones in
> drivers/clocksource?) so it doesn't stick around forever?
> 
> And if not, at least add a clear comment here, and maybe some build
> warnings to the old function so the driver owners know to make the
> conversion happen quickly.

Yes I plan to send out the conversion patches and deprecate the
function if this is acceptable. Then we can remove the function
after the merge window is over and all stragglers are converted.

> 
> 
> 
> >  extern unsigned long long (*sched_clock_func)(void);
> >diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
> >index aad1ae6..3478b6d 100644
> >--- a/kernel/time/sched_clock.c
> >+++ b/kernel/time/sched_clock.c
> >@@ -35,24 +36,31 @@ static struct clock_data cd = {
> >  	.mult	= NSEC_PER_SEC / HZ,
> >  };
> >-static u32 __read_mostly sched_clock_mask = 0xffffffff;
> >+static u64 __read_mostly sched_clock_mask;
> >-static u32 notrace jiffy_sched_clock_read(void)
> >+static u64 notrace jiffy_sched_clock_read(void)
> >  {
> >-	return (u32)(jiffies - INITIAL_JIFFIES);
> >+	return (u64)(jiffies - INITIAL_JIFFIES);
> >  }
> 
> Also, you might add a comment noting you register jiffies w/
> BITS_PER_LONG, to clarify that we don't have to use jiffies_64 here
> on 32-bit systems (despite the u64 cast)?

Sure. Perhaps it is clearer if we don't have the u64 cast here at
all?
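[Illustrating the point: the widening already happens via the function's
return type, so the cast is redundant. A standalone sketch, not the kernel
code itself:]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch: an unsigned long value is implicitly widened
 * by the u64 return type, so an explicit (u64) cast adds nothing.
 * On a 32-bit system the result still carries only BITS_PER_LONG
 * significant bits, which is why the jiffy clock is registered
 * with BITS_PER_LONG rather than 64.
 */
static uint64_t jiffy_delta_to_u64(unsigned long delta)
{
	return delta;	/* implicit widening, no cast needed */
}
```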

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation


* Re: [PATCHv3 3/3] arm64: Move to generic sched_clock infrastructure
  2013-06-05 23:54 ` [PATCHv3 3/3] arm64: Move to generic sched_clock infrastructure Stephen Boyd
@ 2013-06-12 18:51   ` Christopher Covington
  0 siblings, 0 replies; 7+ messages in thread
From: Christopher Covington @ 2013-06-12 18:51 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: John Stultz, linux-kernel, linux-arm-msm, linux-arm-kernel,
	Russell King, arm, Catalin Marinas, Will Deacon, Thomas Gleixner

On 06/05/2013 07:54 PM, Stephen Boyd wrote:
> Use the generic sched_clock infrastructure instead of rolling our
> own. This has the added benefit of fixing suspend/resume as
> outlined in 6a4dae5 (ARM: 7565/1: sched: stop sched_clock()
> during suspend, 2012-10-23) and correcting the timestamps when
> the hardware returns a value instead of 0 upon the first read.

Builds and runs for me on software models.

Tested-by: Christopher Covington <cov@codeaurora.org>

Cheers,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.

