* [patch 0/6] timekeeping: Cure the signed/unsigned wreckage
@ 2016-12-08 20:49 Thomas Gleixner
  2016-12-08 20:49 ` [patch 1/6] timekeeping: Force unsigned clocksource to nanoseconds conversion Thomas Gleixner
  ` (7 more replies)
  0 siblings, 8 replies; 35+ messages in thread
From: Thomas Gleixner @ 2016-12-08 20:49 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, David Gibson, Liav Rehana,
    Chris Metcalf, Richard Cochran, Prarit Bhargava, Laurent Vivier,
    Christopher S. Hall

This series addresses the recently reintroduced signed vs. unsigned
wreckage by cleaning up the whole call chain instead of just making a
simple s64 -> u64 'fix' at one point and keeping the rest signed, which
eventually led to the unintended signed conversion and brought back an
issue that was fixed a year ago already.

Here is the queue:

      timekeeping: Force unsigned clocksource to nanoseconds conversions
      timekeeping: Make the conversion call chain consistently unsigned
      timekeeping: Get rid of pointless typecasts

These three patches are definitely urgent material.

      timekeeping: Use mul_u64_u32_shr() instead of open coding it

Can wait for 4.11, but for sanity reasons it should go into 4.10.

      [RFD] timekeeping: Provide optional 128bit math

This is material for discussion. I'm not sure if we want to do that at
all, but it addresses the insanities of long time scheduled out VMs.

      [RFD] timekeeping: Get rid of cycle_t

This one cannot be merged right away as there are further cycle_t users in
next. I merely added it for reference and it can be done around rc1 time
by running a coccinelle script.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 35+ messages in thread
* [patch 1/6] timekeeping: Force unsigned clocksource to nanoseconds conversion
  2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner
@ 2016-12-08 20:49 ` Thomas Gleixner
  2016-12-08 23:38   ` David Gibson
  2016-12-09 11:13   ` [tip:timers/core] timekeeping_Force_unsigned_clocksource_to_nanoseconds_conversion tip-bot for Thomas Gleixner
  2016-12-08 20:49 ` [patch 2/6] timekeeping: Make the conversion call chain consistently unsigned Thomas Gleixner
  ` (6 subsequent siblings)
  7 siblings, 2 replies; 35+ messages in thread
From: Thomas Gleixner @ 2016-12-08 20:49 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, David Gibson, Liav Rehana,
    Chris Metcalf, Richard Cochran, Prarit Bhargava, Laurent Vivier,
    Christopher S. Hall

[-- Attachment #1: timekeeping--Force-unsigned-conversions.patch --]
[-- Type: text/plain, Size: 4972 bytes --]

The clocksource delta to nanoseconds conversion is using signed math, but
the delta is unsigned. This makes the conversion space smaller than
necessary and in case of a multiplication overflow the conversion can
become negative. The conversion is done with scaled math:

    s64 nsec_delta = ((s64)clkdelta * clk->mult) >> clk->shift;

Shifting a signed integer right obviously preserves the sign, which has
interesting consequences:

 - Time jumps backwards

 - __iter_div_u64_rem() which is used in one of the calling code paths
   will take forever to piecewise calculate the seconds/nanoseconds part.

This has been reported by several people with different scenarios:

David observed that when stopping a VM with a debugger:

 "It was essentially the stopped by debugger case. I forget exactly why,
  but the guest was being explicitly stopped from outside, it wasn't just
  scheduling lag. I think it was something in the vicinity of 10 minutes
  stopped."

When lifting the stop the machine went dead.
The stopped by debugger case is not really interesting, but nevertheless it
would be a good thing not to die completely.

But this was also observed on a live system by Liav:

 "When the OS is too overloaded, delta will get a high enough value for the
  msb of the sum delta * tkr->mult + tkr->xtime_nsec to be set, and so
  after the shift the nsec variable will gain a value similar to
  0xffffffffff000000."

Unfortunately this has been reintroduced recently with commit 6bd58f09e1d8
("time: Add cycles to nanoseconds translation"). It had been fixed a year
ago already in commit 35a4933a8959 ("time: Avoid signed overflow in
timekeeping_get_ns()").

Though it's not surprising that the issue has been reintroduced because the
function itself and the whole call chain uses s64 for the result and the
propagation of it. The change in this recent commit is subtle:

   s64 nsec;

-  nsec = (d * m + n) >> s;
+  nsec = d * m + n;
+  nsec >>= s;

d being of type cycle_t adds another level of obfuscation.

This wouldn't have happened if the previous change to unsigned computation
had made the 'nsec' variable u64 right away and a follow up patch had
cleaned up the whole call chain.

There have been patches submitted which basically did a revert of the above
patch, leaving everything else unchanged as signed. Back to square one. This
spawned an admittedly pointless discussion about potential users which rely
on the unsigned behaviour until someone pointed out that it had been fixed
before. The changelogs of said patches added further confusion as they
finally made false claims about the consequences for eventual users which
expect signed results.

Despite delta being cycle_t, aka. u64, it's very well possible to hand in
a signed negative value and the signed computation will happily return the
correct result. But nobody actually sat down and analyzed the code which
was added as user after the probably unintended signed conversion.
Though in sensitive code like this it's better to analyze it properly and
make sure that nothing relies on this than hunting the subtle wreckage half
a year later. After analyzing all call chains it stands that no caller can
hand in a negative value (which actually would work due to the s64 cast)
and rely on the signed math to do the right thing.

Change the conversion function to unsigned math. The conversion of all call
chains is done in a follow up patch.

This solves the starvation issue, which was caused by the negative result,
but it does not solve the underlying problem. It merely postpones it. When
the timekeeper update is deferred long enough that the unsigned
multiplication overflows, then time going backwards is observable again.

Neither does it solve the issue of clocksources with a small counter width
which will wrap around possibly several times and cause random time stamps
to be generated. But those are usually not found on systems used for
virtualization, so this is likely a non-issue.

I took the liberty to claim authorship for this simply because analyzing
all callsites and writing the changelog took substantially more time than
just making the simple s/s64/u64/ change and ignoring the rest.
Fixes: 6bd58f09e1d8 ("time: Add cycles to nanoseconds translation")
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Reported-by: Liav Rehana <liavr@mellanox.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/time/timekeeping.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -299,10 +299,10 @@ u32 (*arch_gettimeoffset)(void) = defaul
 static inline u32 arch_gettimeoffset(void) { return 0; }
 #endif
 
-static inline s64 timekeeping_delta_to_ns(struct tk_read_base *tkr,
+static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr,
 					  cycle_t delta)
 {
-	s64 nsec;
+	u64 nsec;
 
 	nsec = delta * tkr->mult + tkr->xtime_nsec;
 	nsec >>= tkr->shift;

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: [patch 1/6] timekeeping: Force unsigned clocksource to nanoseconds conversion 2016-12-08 20:49 ` [patch 1/6] timekeeping: Force unsigned clocksource to nanoseconds conversion Thomas Gleixner @ 2016-12-08 23:38 ` David Gibson 2016-12-09 11:13 ` [tip:timers/core] timekeeping_Force_unsigned_clocksource_to_nanoseconds_conversion tip-bot for Thomas Gleixner 1 sibling, 0 replies; 35+ messages in thread From: David Gibson @ 2016-12-08 23:38 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Peter Zijlstra, Ingo Molnar, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: Type: text/plain, Size: 5654 bytes --] On Thu, Dec 08, 2016 at 08:49:32PM -0000, Thomas Gleixner wrote: > The clocksource delta to nanoseconds conversion is using signed math, but > the delta is unsigned. This makes the conversion space smaller than > necessary and in case of a multiplication overflow the conversion can > become negative. The conversion is done with scaled math: > > s64 nsec_delta = ((s64)clkdelta * clk->mult) >> clk->shift; > > Shifting a signed integer right obvioulsy preserves the sign, which has > interesting consequences: > > - Time jumps backwards > > - __iter_div_u64_rem() which is used in one of the calling code pathes > will take forever to piecewise calculate the seconds/nanoseconds part. > > This has been reported by several people with different scenarios: > > David observed that when stopping a VM with a debugger: > > "It was essentially the stopped by debugger case. I forget exactly why, > but the guest was being explicitly stopped from outside, it wasn't just > scheduling lag. I think it was something in the vicinity of 10 minutes > stopped." > > When lifting the stop the machine went dead. > > The stopped by debugger case is not really interesting, but nevertheless it > would be a good thing not to die completely. 
> > But this was also observed on a live system by Liav: > > "When the OS is too overloaded, delta will get a high enough value for the > msb of the sum delta * tkr->mult + tkr->xtime_nsec to be set, and so > after the shift the nsec variable will gain a value similar to > 0xffffffffff000000." > > Unfortunately this has been reintroduced recently with commit 6bd58f09e1d8 > ("time: Add cycles to nanoseconds translation"). It had been fixed a year > ago already in commit 35a4933a8959 ("time: Avoid signed overflow in > timekeeping_get_ns()"). > > Though it's not surprising that the issue has been reintroduced because the > function itself and the whole call chain uses s64 for the result and the > propagation of it. The change in this recent commit is subtle: > > s64 nsec; > > - nsec = (d * m + n) >> s: > + nsec = d * m + n; > + nsec >>= s; > > d being type of cycles_t adds another level of obfuscation. > > This wouldn't have happened if the previous change to unsigned computation > would have made the 'nsec' variable u64 right away and a follow up patch > had cleaned up the whole call chain. > > There have been patches submitted which basically did a revert of the above > patch leaving everything else unchanged as signed. Back to square one. This > spawned a admittedly pointless discussion about potential users which rely > on the unsigned behaviour until someone pointed out that it had been fixed > before. The changelogs of said patches added further confusion as they made > finally false claims about the consequences for eventual users which expect > signed results. > > Despite delta being cycles_t, aka. u64, it's very well possible to hand in > a signed negative value and the signed computation will happily return the > correct result. But nobody actually sat down and analyzed the code which > was added as user after the propably unintended signed conversion. 
> > Though in sensitive code like this it's better to analyze it proper and > make sure that nothing relies on this than hunting the subtle wreckage half > a year later. After analyzing all call chains it stands that no caller can > hand in a negative value (which actually would work due to the s64 cast) > and rely on the signed math to do the right thing. > > Change the conversion function to unsigned math. The conversion of all call > chains is done in a follow up patch. > > This solves the starvation issue, which was caused by the negative result, > but it does not solve the underlying problem. It merily procrastinates > it. When the timekeeper update is deferred long enough that the unsigned > multiplication overflows, then time going backwards is observable again. > > It does neither solve the issue of clocksources with a small counter width > which will wrap around possibly several times and cause random time stamps > to be generated. But those are usually not found on systems used for > virtualization, so this is likely a non issue. > > I took the liberty to claim authorship for this simply because > analyzing all callsites and writing the changelog took substantially > more time than just making the simple s/s64/u64/ change and ignore the > rest. 
> > Fixes: 6bd58f09e1d8 ("time: Add cycles to nanoseconds translation") > Reported-by: David Gibson <david@gibson.dropbear.id.au> > Reported-by: Liav Rehana <liavr@mellanox.com> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> > --- > kernel/time/timekeeping.c | 4 ++-- > 1 file changed, 2 insertions(+), 2 deletions(-) > > --- a/kernel/time/timekeeping.c > +++ b/kernel/time/timekeeping.c > @@ -299,10 +299,10 @@ u32 (*arch_gettimeoffset)(void) = defaul > static inline u32 arch_gettimeoffset(void) { return 0; } > #endif > > -static inline s64 timekeeping_delta_to_ns(struct tk_read_base *tkr, > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, > cycle_t delta) > { > - s64 nsec; > + u64 nsec; > > nsec = delta * tkr->mult + tkr->xtime_nsec; > nsec >>= tkr->shift; > > -- David Gibson | I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_ | _way_ _around_! http://www.ozlabs.org/~dgibson [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 819 bytes --] ^ permalink raw reply [flat|nested] 35+ messages in thread
* [tip:timers/core] timekeeping_Force_unsigned_clocksource_to_nanoseconds_conversion
  2016-12-08 20:49 ` [patch 1/6] timekeeping: Force unsigned clocksource to nanoseconds conversion Thomas Gleixner
  2016-12-08 23:38   ` David Gibson
@ 2016-12-09 11:13   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 35+ messages in thread
From: tip-bot for Thomas Gleixner @ 2016-12-09 11:13 UTC (permalink / raw)
To: linux-tip-commits
Cc: cmetcalf, linux-kernel, tglx, david, john.stultz, christopher.s.hall,
    mingo, lvivier, prarit, hpa, liavr, richardcochran, peterz

Commit-ID:  9c1645727b8fa90d07256fdfcc45bf831242a3ab
Gitweb:     http://git.kernel.org/tip/9c1645727b8fa90d07256fdfcc45bf831242a3ab
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Thu, 8 Dec 2016 20:49:32 +0000
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Fri, 9 Dec 2016 12:06:41 +0100

timekeeping_Force_unsigned_clocksource_to_nanoseconds_conversion

The clocksource delta to nanoseconds conversion is using signed math, but
the delta is unsigned. This makes the conversion space smaller than
necessary and in case of a multiplication overflow the conversion can
become negative. The conversion is done with scaled math:

    s64 nsec_delta = ((s64)clkdelta * clk->mult) >> clk->shift;

Shifting a signed integer right obviously preserves the sign, which has
interesting consequences:

 - Time jumps backwards

 - __iter_div_u64_rem() which is used in one of the calling code paths
   will take forever to piecewise calculate the seconds/nanoseconds part.

This has been reported by several people with different scenarios:

David observed that when stopping a VM with a debugger:

 "It was essentially the stopped by debugger case. I forget exactly why,
  but the guest was being explicitly stopped from outside, it wasn't just
  scheduling lag. I think it was something in the vicinity of 10 minutes
  stopped."

When lifting the stop the machine went dead.
The stopped by debugger case is not really interesting, but nevertheless it
would be a good thing not to die completely.

But this was also observed on a live system by Liav:

 "When the OS is too overloaded, delta will get a high enough value for the
  msb of the sum delta * tkr->mult + tkr->xtime_nsec to be set, and so
  after the shift the nsec variable will gain a value similar to
  0xffffffffff000000."

Unfortunately this has been reintroduced recently with commit 6bd58f09e1d8
("time: Add cycles to nanoseconds translation"). It had been fixed a year
ago already in commit 35a4933a8959 ("time: Avoid signed overflow in
timekeeping_get_ns()").

Though it's not surprising that the issue has been reintroduced because the
function itself and the whole call chain uses s64 for the result and the
propagation of it. The change in this recent commit is subtle:

   s64 nsec;

-  nsec = (d * m + n) >> s;
+  nsec = d * m + n;
+  nsec >>= s;

d being of type cycle_t adds another level of obfuscation.

This wouldn't have happened if the previous change to unsigned computation
had made the 'nsec' variable u64 right away and a follow up patch had
cleaned up the whole call chain.

There have been patches submitted which basically did a revert of the above
patch, leaving everything else unchanged as signed. Back to square one. This
spawned an admittedly pointless discussion about potential users which rely
on the unsigned behaviour until someone pointed out that it had been fixed
before. The changelogs of said patches added further confusion as they
finally made false claims about the consequences for eventual users which
expect signed results.

Despite delta being cycle_t, aka. u64, it's very well possible to hand in
a signed negative value and the signed computation will happily return the
correct result. But nobody actually sat down and analyzed the code which
was added as user after the probably unintended signed conversion.
Though in sensitive code like this it's better to analyze it properly and
make sure that nothing relies on this than hunting the subtle wreckage half
a year later. After analyzing all call chains it stands that no caller can
hand in a negative value (which actually would work due to the s64 cast)
and rely on the signed math to do the right thing.

Change the conversion function to unsigned math. The conversion of all call
chains is done in a follow up patch.

This solves the starvation issue, which was caused by the negative result,
but it does not solve the underlying problem. It merely postpones it. When
the timekeeper update is deferred long enough that the unsigned
multiplication overflows, then time going backwards is observable again.

Neither does it solve the issue of clocksources with a small counter width
which will wrap around possibly several times and cause random time stamps
to be generated. But those are usually not found on systems used for
virtualization, so this is likely a non-issue.

I took the liberty to claim authorship for this simply because analyzing
all callsites and writing the changelog took substantially more time than
just making the simple s/s64/u64/ change and ignoring the rest.

Fixes: 6bd58f09e1d8 ("time: Add cycles to nanoseconds translation")
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Reported-by: Liav Rehana <liavr@mellanox.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: "Christopher S.
Hall" <christopher.s.hall@intel.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20161208204228.688545601@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/time/timekeeping.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index b2286e9..bfe589e 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -299,10 +299,10 @@ u32 (*arch_gettimeoffset)(void) = default_arch_gettimeoffset;
 static inline u32 arch_gettimeoffset(void) { return 0; }
 #endif
 
-static inline s64 timekeeping_delta_to_ns(struct tk_read_base *tkr,
+static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr,
 					  cycle_t delta)
 {
-	s64 nsec;
+	u64 nsec;
 
 	nsec = delta * tkr->mult + tkr->xtime_nsec;
 	nsec >>= tkr->shift;

^ permalink raw reply related	[flat|nested] 35+ messages in thread
* [patch 2/6] timekeeping: Make the conversion call chain consistently unsigned
  2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner
  2016-12-08 20:49 ` [patch 1/6] timekeeping: Force unsigned clocksource to nanoseconds conversion Thomas Gleixner
@ 2016-12-08 20:49 ` Thomas Gleixner
  2016-12-08 23:39   ` David Gibson
  2016-12-09 11:13   ` [tip:timers/core] " tip-bot for Thomas Gleixner
  2016-12-08 20:49 ` [patch 3/6] timekeeping: Get rid of pointless typecasts Thomas Gleixner
  ` (5 subsequent siblings)
  7 siblings, 2 replies; 35+ messages in thread
From: Thomas Gleixner @ 2016-12-08 20:49 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, David Gibson, Liav Rehana,
    Chris Metcalf, Richard Cochran, Prarit Bhargava, Laurent Vivier,
    Christopher S. Hall

[-- Attachment #1: timekeeping--Make-the-conversion-call-chain-consistently-unsigned.patch --]
[-- Type: text/plain, Size: 3144 bytes --]

Propagating an unsigned value through signed variables and functions makes
absolutely no sense and is just prone to (re)introduce subtle signed
vs. unsigned issues as happened recently.

Clean it up.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- kernel/time/timekeeping.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -311,7 +311,7 @@ static inline u64 timekeeping_delta_to_n return nsec + arch_gettimeoffset(); } -static inline s64 timekeeping_get_ns(struct tk_read_base *tkr) +static inline u64 timekeeping_get_ns(struct tk_read_base *tkr) { cycle_t delta; @@ -319,8 +319,8 @@ static inline s64 timekeeping_get_ns(str return timekeeping_delta_to_ns(tkr, delta); } -static inline s64 timekeeping_cycles_to_ns(struct tk_read_base *tkr, - cycle_t cycles) +static inline u64 timekeeping_cycles_to_ns(struct tk_read_base *tkr, + cycle_t cycles) { cycle_t delta; @@ -623,7 +623,7 @@ static void timekeeping_forward_now(stru { struct clocksource *clock = tk->tkr_mono.clock; cycle_t cycle_now, delta; - s64 nsec; + u64 nsec; cycle_now = tk->tkr_mono.read(clock); delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, tk->tkr_mono.mask); @@ -652,7 +652,7 @@ int __getnstimeofday64(struct timespec64 { struct timekeeper *tk = &tk_core.timekeeper; unsigned long seq; - s64 nsecs = 0; + u64 nsecs; do { seq = read_seqcount_begin(&tk_core.seq); @@ -692,7 +692,7 @@ ktime_t ktime_get(void) struct timekeeper *tk = &tk_core.timekeeper; unsigned int seq; ktime_t base; - s64 nsecs; + u64 nsecs; WARN_ON(timekeeping_suspended); @@ -735,7 +735,7 @@ ktime_t ktime_get_with_offset(enum tk_of struct timekeeper *tk = &tk_core.timekeeper; unsigned int seq; ktime_t base, *offset = offsets[offs]; - s64 nsecs; + u64 nsecs; WARN_ON(timekeeping_suspended); @@ -779,7 +779,7 @@ ktime_t ktime_get_raw(void) struct timekeeper *tk = &tk_core.timekeeper; unsigned int seq; ktime_t base; - s64 nsecs; + u64 nsecs; do { seq = read_seqcount_begin(&tk_core.seq); @@ -804,8 +804,8 @@ void ktime_get_ts64(struct timespec64 *t { struct timekeeper *tk = &tk_core.timekeeper; struct timespec64 tomono; - s64 nsec; 
unsigned int seq; + u64 nsec; WARN_ON(timekeeping_suspended); @@ -893,8 +893,8 @@ void ktime_get_snapshot(struct system_ti unsigned long seq; ktime_t base_raw; ktime_t base_real; - s64 nsec_raw; - s64 nsec_real; + u64 nsec_raw; + u64 nsec_real; cycle_t now; WARN_ON_ONCE(timekeeping_suspended); @@ -1052,7 +1052,7 @@ int get_device_system_crosststamp(int (* cycle_t cycles, now, interval_start; unsigned int clock_was_set_seq = 0; ktime_t base_real, base_raw; - s64 nsec_real, nsec_raw; + u64 nsec_real, nsec_raw; u8 cs_was_changed_seq; unsigned long seq; bool do_interp; @@ -1365,7 +1365,7 @@ void getrawmonotonic64(struct timespec64 struct timekeeper *tk = &tk_core.timekeeper; struct timespec64 ts64; unsigned long seq; - s64 nsecs; + u64 nsecs; do { seq = read_seqcount_begin(&tk_core.seq); ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 2/6] timekeeping: Make the conversion call chain consistently unsigned 2016-12-08 20:49 ` [patch 2/6] timekeeping: Make the conversion call chain consistently unsigned Thomas Gleixner @ 2016-12-08 23:39 ` David Gibson 2016-12-09 11:13 ` [tip:timers/core] " tip-bot for Thomas Gleixner 1 sibling, 0 replies; 35+ messages in thread From: David Gibson @ 2016-12-08 23:39 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Peter Zijlstra, Ingo Molnar, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: Type: text/plain, Size: 3826 bytes --] On Thu, Dec 08, 2016 at 08:49:34PM -0000, Thomas Gleixner wrote: > Propagating a unsigned value through signed variables and functions makes > absolutely no sense and is just prone to (re)introduce subtle signed > vs. unsigned issues as happened recently. > > Clean it up. > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> > --- > kernel/time/timekeeping.c | 26 +++++++++++++------------- > 1 file changed, 13 insertions(+), 13 deletions(-) > > --- a/kernel/time/timekeeping.c > +++ b/kernel/time/timekeeping.c > @@ -311,7 +311,7 @@ static inline u64 timekeeping_delta_to_n > return nsec + arch_gettimeoffset(); > } > > -static inline s64 timekeeping_get_ns(struct tk_read_base *tkr) > +static inline u64 timekeeping_get_ns(struct tk_read_base *tkr) > { > cycle_t delta; > > @@ -319,8 +319,8 @@ static inline s64 timekeeping_get_ns(str > return timekeeping_delta_to_ns(tkr, delta); > } > > -static inline s64 timekeeping_cycles_to_ns(struct tk_read_base *tkr, > - cycle_t cycles) > +static inline u64 timekeeping_cycles_to_ns(struct tk_read_base *tkr, > + cycle_t cycles) > { > cycle_t delta; > > @@ -623,7 +623,7 @@ static void timekeeping_forward_now(stru > { > struct clocksource *clock = tk->tkr_mono.clock; > cycle_t cycle_now, delta; > - s64 nsec; > + u64 nsec; > > cycle_now = 
tk->tkr_mono.read(clock); > delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, tk->tkr_mono.mask); > @@ -652,7 +652,7 @@ int __getnstimeofday64(struct timespec64 > { > struct timekeeper *tk = &tk_core.timekeeper; > unsigned long seq; > - s64 nsecs = 0; > + u64 nsecs; > > do { > seq = read_seqcount_begin(&tk_core.seq); > @@ -692,7 +692,7 @@ ktime_t ktime_get(void) > struct timekeeper *tk = &tk_core.timekeeper; > unsigned int seq; > ktime_t base; > - s64 nsecs; > + u64 nsecs; > > WARN_ON(timekeeping_suspended); > > @@ -735,7 +735,7 @@ ktime_t ktime_get_with_offset(enum tk_of > struct timekeeper *tk = &tk_core.timekeeper; > unsigned int seq; > ktime_t base, *offset = offsets[offs]; > - s64 nsecs; > + u64 nsecs; > > WARN_ON(timekeeping_suspended); > > @@ -779,7 +779,7 @@ ktime_t ktime_get_raw(void) > struct timekeeper *tk = &tk_core.timekeeper; > unsigned int seq; > ktime_t base; > - s64 nsecs; > + u64 nsecs; > > do { > seq = read_seqcount_begin(&tk_core.seq); > @@ -804,8 +804,8 @@ void ktime_get_ts64(struct timespec64 *t > { > struct timekeeper *tk = &tk_core.timekeeper; > struct timespec64 tomono; > - s64 nsec; > unsigned int seq; > + u64 nsec; > > WARN_ON(timekeeping_suspended); > > @@ -893,8 +893,8 @@ void ktime_get_snapshot(struct system_ti > unsigned long seq; > ktime_t base_raw; > ktime_t base_real; > - s64 nsec_raw; > - s64 nsec_real; > + u64 nsec_raw; > + u64 nsec_real; > cycle_t now; > > WARN_ON_ONCE(timekeeping_suspended); > @@ -1052,7 +1052,7 @@ int get_device_system_crosststamp(int (* > cycle_t cycles, now, interval_start; > unsigned int clock_was_set_seq = 0; > ktime_t base_real, base_raw; > - s64 nsec_real, nsec_raw; > + u64 nsec_real, nsec_raw; > u8 cs_was_changed_seq; > unsigned long seq; > bool do_interp; > @@ -1365,7 +1365,7 @@ void getrawmonotonic64(struct timespec64 > struct timekeeper *tk = &tk_core.timekeeper; > struct timespec64 ts64; > unsigned long seq; > - s64 nsecs; > + u64 nsecs; > > do { > seq = 
read_seqcount_begin(&tk_core.seq); > > -- David Gibson | I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_ | _way_ _around_! http://www.ozlabs.org/~dgibson [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 819 bytes --] ^ permalink raw reply [flat|nested] 35+ messages in thread
* [tip:timers/core] timekeeping: Make the conversion call chain consistently unsigned
  2016-12-08 20:49 ` [patch 2/6] timekeeping: Make the conversion call chain consistently unsigned Thomas Gleixner
  2016-12-08 23:39   ` David Gibson
@ 2016-12-09 11:13   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 35+ messages in thread
From: tip-bot for Thomas Gleixner @ 2016-12-09 11:13 UTC (permalink / raw)
To: linux-tip-commits
Cc: prarit, mingo, john.stultz, hpa, tglx, linux-kernel, richardcochran,
    christopher.s.hall, liavr, peterz, cmetcalf, david, lvivier

Commit-ID:  acc89612a70e370a5640fd77a83f15b7b94d85e4
Gitweb:     http://git.kernel.org/tip/acc89612a70e370a5640fd77a83f15b7b94d85e4
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Thu, 8 Dec 2016 20:49:34 +0000
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Fri, 9 Dec 2016 12:06:41 +0100

timekeeping: Make the conversion call chain consistently unsigned

Propagating an unsigned value through signed variables and functions makes
absolutely no sense and is just prone to (re)introduce subtle signed
vs. unsigned issues as happened recently.

Clean it up.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: "Christopher S.
Hall" <christopher.s.hall@intel.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Liav Rehana <liavr@mellanox.com> Cc: John Stultz <john.stultz@linaro.org> Link: http://lkml.kernel.org/r/20161208204228.765843099@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- kernel/time/timekeeping.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c index bfe589e..5244821 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -311,7 +311,7 @@ static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, return nsec + arch_gettimeoffset(); } -static inline s64 timekeeping_get_ns(struct tk_read_base *tkr) +static inline u64 timekeeping_get_ns(struct tk_read_base *tkr) { cycle_t delta; @@ -319,8 +319,8 @@ static inline s64 timekeeping_get_ns(struct tk_read_base *tkr) return timekeeping_delta_to_ns(tkr, delta); } -static inline s64 timekeeping_cycles_to_ns(struct tk_read_base *tkr, - cycle_t cycles) +static inline u64 timekeeping_cycles_to_ns(struct tk_read_base *tkr, + cycle_t cycles) { cycle_t delta; @@ -652,7 +652,7 @@ static void timekeeping_forward_now(struct timekeeper *tk) { struct clocksource *clock = tk->tkr_mono.clock; cycle_t cycle_now, delta; - s64 nsec; + u64 nsec; cycle_now = tk->tkr_mono.read(clock); delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, tk->tkr_mono.mask); @@ -681,7 +681,7 @@ int __getnstimeofday64(struct timespec64 *ts) { struct timekeeper *tk = &tk_core.timekeeper; unsigned long seq; - s64 nsecs = 0; + u64 nsecs; do { seq = read_seqcount_begin(&tk_core.seq); @@ -721,7 +721,7 @@ ktime_t ktime_get(void) struct timekeeper *tk = &tk_core.timekeeper; unsigned int seq; ktime_t base; - s64 nsecs; + u64 nsecs; WARN_ON(timekeeping_suspended); @@ -764,7 +764,7 @@ ktime_t ktime_get_with_offset(enum tk_offsets offs) struct timekeeper *tk = &tk_core.timekeeper; 
unsigned int seq; ktime_t base, *offset = offsets[offs]; - s64 nsecs; + u64 nsecs; WARN_ON(timekeeping_suspended); @@ -808,7 +808,7 @@ ktime_t ktime_get_raw(void) struct timekeeper *tk = &tk_core.timekeeper; unsigned int seq; ktime_t base; - s64 nsecs; + u64 nsecs; do { seq = read_seqcount_begin(&tk_core.seq); @@ -833,8 +833,8 @@ void ktime_get_ts64(struct timespec64 *ts) { struct timekeeper *tk = &tk_core.timekeeper; struct timespec64 tomono; - s64 nsec; unsigned int seq; + u64 nsec; WARN_ON(timekeeping_suspended); @@ -922,8 +922,8 @@ void ktime_get_snapshot(struct system_time_snapshot *systime_snapshot) unsigned long seq; ktime_t base_raw; ktime_t base_real; - s64 nsec_raw; - s64 nsec_real; + u64 nsec_raw; + u64 nsec_real; cycle_t now; WARN_ON_ONCE(timekeeping_suspended); @@ -1081,7 +1081,7 @@ int get_device_system_crosststamp(int (*get_time_fn) cycle_t cycles, now, interval_start; unsigned int clock_was_set_seq = 0; ktime_t base_real, base_raw; - s64 nsec_real, nsec_raw; + u64 nsec_real, nsec_raw; u8 cs_was_changed_seq; unsigned long seq; bool do_interp; @@ -1394,7 +1394,7 @@ void getrawmonotonic64(struct timespec64 *ts) struct timekeeper *tk = &tk_core.timekeeper; struct timespec64 ts64; unsigned long seq; - s64 nsecs; + u64 nsecs; do { seq = read_seqcount_begin(&tk_core.seq); ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [patch 3/6] timekeeping: Get rid of pointless typecasts 2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner 2016-12-08 20:49 ` [patch 1/6] timekeeping: Force unsigned clocksource to nanoseconds conversion Thomas Gleixner 2016-12-08 20:49 ` [patch 2/6] timekeeping: Make the conversion call chain consistently unsigned Thomas Gleixner @ 2016-12-08 20:49 ` Thomas Gleixner 2016-12-08 23:40 ` David Gibson 2016-12-09 11:14 ` [tip:timers/core] " tip-bot for Thomas Gleixner 2016-12-08 20:49 ` [patch 4/6] timekeeping: Use mul_u64_u32_shr() instead of open coding it Thomas Gleixner ` (4 subsequent siblings) 7 siblings, 2 replies; 35+ messages in thread From: Thomas Gleixner @ 2016-12-08 20:49 UTC (permalink / raw) To: LKML Cc: John Stultz, Peter Zijlstra, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: timekeeping--Get-rid-of-pointless-typecasts.patch --] [-- Type: text/plain, Size: 904 bytes --] cycle_t is defined as u64, so casting it to u64 is a pointless and confusing exercise. cycle_t should simply go away and be replaced with a plain u64 to avoid further confusion. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- kernel/time/timekeeping.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -258,10 +258,9 @@ static void tk_setup_internals(struct ti tk->cycle_interval = interval; /* Go back from cycles -> shifted ns */ - tk->xtime_interval = (u64) interval * clock->mult; + tk->xtime_interval = interval * clock->mult; tk->xtime_remainder = ntpinterval - tk->xtime_interval; - tk->raw_interval = - ((u64) interval * clock->mult) >> clock->shift; + tk->raw_interval = (interval * clock->mult) >> clock->shift; /* if changing clocks, convert xtime_nsec shift units */ if (old_clock) { ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 3/6] timekeeping: Get rid of pointless typecasts 2016-12-08 20:49 ` [patch 3/6] timekeeping: Get rid of pointless typecasts Thomas Gleixner @ 2016-12-08 23:40 ` David Gibson 2016-12-09 11:14 ` [tip:timers/core] " tip-bot for Thomas Gleixner 1 sibling, 0 replies; 35+ messages in thread From: David Gibson @ 2016-12-08 23:40 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Peter Zijlstra, Ingo Molnar, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: Type: text/plain, Size: 1307 bytes --] On Thu, Dec 08, 2016 at 08:49:36PM -0000, Thomas Gleixner wrote: > cycles_t is defined as u64, so casting it to u64 is a pointless and > confusing exercise. cycles_t should simply go away and be replaced with a > plain u64 to avoid further confusion. > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> > --- > kernel/time/timekeeping.c | 5 ++--- > 1 file changed, 2 insertions(+), 3 deletions(-) > > --- a/kernel/time/timekeeping.c > +++ b/kernel/time/timekeeping.c > @@ -258,10 +258,9 @@ static void tk_setup_internals(struct ti > tk->cycle_interval = interval; > > /* Go back from cycles -> shifted ns */ > - tk->xtime_interval = (u64) interval * clock->mult; > + tk->xtime_interval = interval * clock->mult; > tk->xtime_remainder = ntpinterval - tk->xtime_interval; > - tk->raw_interval = > - ((u64) interval * clock->mult) >> clock->shift; > + tk->raw_interval = (interval * clock->mult) >> clock->shift; > > /* if changing clocks, convert xtime_nsec shift units */ > if (old_clock) { > > -- David Gibson | I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_ | _way_ _around_! http://www.ozlabs.org/~dgibson [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 819 bytes --] ^ permalink raw reply [flat|nested] 35+ messages in thread
* [tip:timers/core] timekeeping: Get rid of pointless typecasts 2016-12-08 20:49 ` [patch 3/6] timekeeping: Get rid of pointless typecasts Thomas Gleixner 2016-12-08 23:40 ` David Gibson @ 2016-12-09 11:14 ` tip-bot for Thomas Gleixner 1 sibling, 0 replies; 35+ messages in thread From: tip-bot for Thomas Gleixner @ 2016-12-09 11:14 UTC (permalink / raw) To: linux-tip-commits Cc: prarit, liavr, john.stultz, cmetcalf, lvivier, tglx, david, linux-kernel, christopher.s.hall, peterz, richardcochran, mingo, hpa Commit-ID: cbd99e3b289e43000c29aa4aa9b94b394cdc68bd Gitweb: http://git.kernel.org/tip/cbd99e3b289e43000c29aa4aa9b94b394cdc68bd Author: Thomas Gleixner <tglx@linutronix.de> AuthorDate: Thu, 8 Dec 2016 20:49:36 +0000 Committer: Thomas Gleixner <tglx@linutronix.de> CommitDate: Fri, 9 Dec 2016 12:06:42 +0100 timekeeping: Get rid of pointless typecasts cycle_t is defined as u64, so casting it to u64 is a pointless and confusing exercise. cycle_t should simply go away and be replaced with a plain u64 to avoid further confusion. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Parit Bhargava <prarit@redhat.com> Cc: Laurent Vivier <lvivier@redhat.com> Cc: "Christopher S. 
Hall" <christopher.s.hall@intel.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Liav Rehana <liavr@mellanox.com> Cc: John Stultz <john.stultz@linaro.org> Link: http://lkml.kernel.org/r/20161208204228.844699737@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- kernel/time/timekeeping.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c index 5244821..82e1b5c 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -258,10 +258,9 @@ static void tk_setup_internals(struct timekeeper *tk, struct clocksource *clock) tk->cycle_interval = interval; /* Go back from cycles -> shifted ns */ - tk->xtime_interval = (u64) interval * clock->mult; + tk->xtime_interval = interval * clock->mult; tk->xtime_remainder = ntpinterval - tk->xtime_interval; - tk->raw_interval = - ((u64) interval * clock->mult) >> clock->shift; + tk->raw_interval = (interval * clock->mult) >> clock->shift; /* if changing clocks, convert xtime_nsec shift units */ if (old_clock) { ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [patch 4/6] timekeeping: Use mul_u64_u32_shr() instead of open coding it 2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner ` (2 preceding siblings ...) 2016-12-08 20:49 ` [patch 3/6] timekeeping: Get rid of pointless typecasts Thomas Gleixner @ 2016-12-08 20:49 ` Thomas Gleixner 2016-12-08 23:41 ` David Gibson 2016-12-09 11:14 ` [tip:timers/core] " tip-bot for Thomas Gleixner 2016-12-08 20:49 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Thomas Gleixner ` (3 subsequent siblings) 7 siblings, 2 replies; 35+ messages in thread From: Thomas Gleixner @ 2016-12-08 20:49 UTC (permalink / raw) To: LKML Cc: John Stultz, Peter Zijlstra, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: timekeeping--Use-mul_u64_u32_shr---instead-of-open-coding-it.patch --] [-- Type: text/plain, Size: 1806 bytes --] The resume code must deal with a clocksource delta which is potentially big enough to overflow the 64bit mult. Replace the open coded handling with the proper function. 
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- kernel/time/timekeeping.c | 26 +++++--------------------- 1 file changed, 5 insertions(+), 21 deletions(-) --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -1615,7 +1615,7 @@ void timekeeping_resume(void) struct clocksource *clock = tk->tkr_mono.clock; unsigned long flags; struct timespec64 ts_new, ts_delta; - cycle_t cycle_now, cycle_delta; + cycle_t cycle_now; sleeptime_injected = false; read_persistent_clock64(&ts_new); @@ -1641,27 +1641,11 @@ void timekeeping_resume(void) cycle_now = tk->tkr_mono.read(clock); if ((clock->flags & CLOCK_SOURCE_SUSPEND_NONSTOP) && cycle_now > tk->tkr_mono.cycle_last) { - u64 num, max = ULLONG_MAX; - u32 mult = clock->mult; - u32 shift = clock->shift; - s64 nsec = 0; - - cycle_delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, - tk->tkr_mono.mask); - - /* - * "cycle_delta * mutl" may cause 64 bits overflow, if the - * suspended time is too long. In that case we need do the - * 64 bits math carefully - */ - do_div(max, mult); - if (cycle_delta > max) { - num = div64_u64(cycle_delta, max); - nsec = (((u64) max * mult) >> shift) * num; - cycle_delta -= num * max; - } - nsec += ((u64) cycle_delta * mult) >> shift; + u64 nsec, cyc_delta; + cyc_delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, + tk->tkr_mono.mask); + nsec = mul_u64_u32_shr(cyc_delta, clock->mult, clock->shift); ts_delta = ns_to_timespec64(nsec); sleeptime_injected = true; } else if (timespec64_compare(&ts_new, &timekeeping_suspend_time) > 0) { ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 4/6] timekeeping: Use mul_u64_u32_shr() instead of open coding it 2016-12-08 20:49 ` [patch 4/6] timekeeping: Use mul_u64_u32_shr() instead of open coding it Thomas Gleixner @ 2016-12-08 23:41 ` David Gibson 2016-12-09 11:14 ` [tip:timers/core] " tip-bot for Thomas Gleixner 1 sibling, 0 replies; 35+ messages in thread From: David Gibson @ 2016-12-08 23:41 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Peter Zijlstra, Ingo Molnar, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: Type: text/plain, Size: 2296 bytes --] On Thu, Dec 08, 2016 at 08:49:38PM -0000, Thomas Gleixner wrote: > The resume code must deal with a clocksource delta which is potentially big > enough to overflow the 64bit mult. > > Replace the open coded handling with the proper function. > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> > --- > kernel/time/timekeeping.c | 26 +++++--------------------- > 1 file changed, 5 insertions(+), 21 deletions(-) > > --- a/kernel/time/timekeeping.c > +++ b/kernel/time/timekeeping.c > @@ -1615,7 +1615,7 @@ void timekeeping_resume(void) > struct clocksource *clock = tk->tkr_mono.clock; > unsigned long flags; > struct timespec64 ts_new, ts_delta; > - cycle_t cycle_now, cycle_delta; > + cycle_t cycle_now; > > sleeptime_injected = false; > read_persistent_clock64(&ts_new); > @@ -1641,27 +1641,11 @@ void timekeeping_resume(void) > cycle_now = tk->tkr_mono.read(clock); > if ((clock->flags & CLOCK_SOURCE_SUSPEND_NONSTOP) && > cycle_now > tk->tkr_mono.cycle_last) { > - u64 num, max = ULLONG_MAX; > - u32 mult = clock->mult; > - u32 shift = clock->shift; > - s64 nsec = 0; > - > - cycle_delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, > - tk->tkr_mono.mask); > - > - /* > - * "cycle_delta * mutl" may cause 64 bits overflow, if the > - * suspended time is too long. 
In that case we need do the > - * 64 bits math carefully > - */ > - do_div(max, mult); > - if (cycle_delta > max) { > - num = div64_u64(cycle_delta, max); > - nsec = (((u64) max * mult) >> shift) * num; > - cycle_delta -= num * max; > - } > - nsec += ((u64) cycle_delta * mult) >> shift; > + u64 nsec, cyc_delta; > > + cyc_delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, > + tk->tkr_mono.mask); > + nsec = mul_u64_u32_shr(cyc_delta, clock->mult, clock->shift); > ts_delta = ns_to_timespec64(nsec); > sleeptime_injected = true; > } else if (timespec64_compare(&ts_new, &timekeeping_suspend_time) > 0) { > > -- David Gibson | I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_ | _way_ _around_! http://www.ozlabs.org/~dgibson [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 819 bytes --] ^ permalink raw reply [flat|nested] 35+ messages in thread
* [tip:timers/core] timekeeping: Use mul_u64_u32_shr() instead of open coding it 2016-12-08 20:49 ` [patch 4/6] timekeeping: Use mul_u64_u32_shr() instead of open coding it Thomas Gleixner 2016-12-08 23:41 ` David Gibson @ 2016-12-09 11:14 ` tip-bot for Thomas Gleixner 1 sibling, 0 replies; 35+ messages in thread From: tip-bot for Thomas Gleixner @ 2016-12-09 11:14 UTC (permalink / raw) To: linux-tip-commits Cc: cmetcalf, richardcochran, mingo, christopher.s.hall, lvivier, linux-kernel, tglx, liavr, peterz, prarit, hpa, john.stultz, david Commit-ID: c029a2bec66e42e57538cb65e28618baf6a4b311 Gitweb: http://git.kernel.org/tip/c029a2bec66e42e57538cb65e28618baf6a4b311 Author: Thomas Gleixner <tglx@linutronix.de> AuthorDate: Thu, 8 Dec 2016 20:49:38 +0000 Committer: Thomas Gleixner <tglx@linutronix.de> CommitDate: Fri, 9 Dec 2016 12:06:42 +0100 timekeeping: Use mul_u64_u32_shr() instead of open coding it The resume code must deal with a clocksource delta which is potentially big enough to overflow the 64bit mult. Replace the open coded handling with the proper function. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Parit Bhargava <prarit@redhat.com> Cc: Laurent Vivier <lvivier@redhat.com> Cc: "Christopher S. 
Hall" <christopher.s.hall@intel.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Liav Rehana <liavr@mellanox.com> Cc: John Stultz <john.stultz@linaro.org> Link: http://lkml.kernel.org/r/20161208204228.921674404@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- kernel/time/timekeeping.c | 26 +++++--------------------- 1 file changed, 5 insertions(+), 21 deletions(-) diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c index 82e1b5c..da233cd 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -1644,7 +1644,7 @@ void timekeeping_resume(void) struct clocksource *clock = tk->tkr_mono.clock; unsigned long flags; struct timespec64 ts_new, ts_delta; - cycle_t cycle_now, cycle_delta; + cycle_t cycle_now; sleeptime_injected = false; read_persistent_clock64(&ts_new); @@ -1670,27 +1670,11 @@ void timekeeping_resume(void) cycle_now = tk->tkr_mono.read(clock); if ((clock->flags & CLOCK_SOURCE_SUSPEND_NONSTOP) && cycle_now > tk->tkr_mono.cycle_last) { - u64 num, max = ULLONG_MAX; - u32 mult = clock->mult; - u32 shift = clock->shift; - s64 nsec = 0; - - cycle_delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, - tk->tkr_mono.mask); - - /* - * "cycle_delta * mutl" may cause 64 bits overflow, if the - * suspended time is too long. In that case we need do the - * 64 bits math carefully - */ - do_div(max, mult); - if (cycle_delta > max) { - num = div64_u64(cycle_delta, max); - nsec = (((u64) max * mult) >> shift) * num; - cycle_delta -= num * max; - } - nsec += ((u64) cycle_delta * mult) >> shift; + u64 nsec, cyc_delta; + cyc_delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, + tk->tkr_mono.mask); + nsec = mul_u64_u32_shr(cyc_delta, clock->mult, clock->shift); ts_delta = ns_to_timespec64(nsec); sleeptime_injected = true; } else if (timespec64_compare(&ts_new, &timekeeping_suspend_time) > 0) { ^ permalink raw reply related [flat|nested] 35+ messages in thread
* [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner ` (3 preceding siblings ...) 2016-12-08 20:49 ` [patch 4/6] timekeeping: Use mul_u64_u32_shr() instead of open coding it Thomas Gleixner @ 2016-12-08 20:49 ` Thomas Gleixner 2016-12-09 4:08 ` Ingo Molnar ` (2 more replies) 2016-12-08 20:49 ` [patch 6/6] [RFD] timekeeping: Get rid of cycle_t Thomas Gleixner ` (2 subsequent siblings) 7 siblings, 3 replies; 35+ messages in thread From: Thomas Gleixner @ 2016-12-08 20:49 UTC (permalink / raw) To: LKML Cc: John Stultz, Peter Zijlstra, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: timekeeping--Provide-optional-128bit-math.patch --] [-- Type: text/plain, Size: 2877 bytes --] If the timekeeping CPU is scheduled out long enough by a hypervisor the clocksource delta multiplication can overflow and as a result time can go backwards. That's insane to begin with, but people already triggered a signed multiplication overflow, so an unsigned overflow is not necessarily impossible. Implement optional 128bit math which can be selected by a config option. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- kernel/time/Kconfig | 15 +++++++++++++++ kernel/time/timekeeping.c | 38 ++++++++++++++++++++++++++++++++++++-- 2 files changed, 51 insertions(+), 2 deletions(-) --- a/kernel/time/Kconfig +++ b/kernel/time/Kconfig @@ -51,6 +51,21 @@ config GENERIC_CLOCKEVENTS_MIN_ADJUST config GENERIC_CMOS_UPDATE bool +config TIMEKEEPING_USE_128BIT_MATH + bool "Enable 128 bit math in the timekeeping hotpath" + default n + depends on !ARCH_USES_GETTIMEOFFSET && EXPERT + help + + If VMs get scheduled out for a long time then the clocksource + delta to nanoseconds conversion in timekeeping can overflow the + 64bit multiplication. As a result time going backwards might be + observed.
+ + Enable this only if you want to support insane setups with + massive overcommitment as this introduces overhead into the + timekeeping hotpath. + if GENERIC_CLOCKEVENTS menu "Timers subsystem" --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -298,8 +298,41 @@ u32 (*arch_gettimeoffset)(void) = defaul static inline u32 arch_gettimeoffset(void) { return 0; } #endif -static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, - cycle_t delta) +/* + * Enabled when timekeeping is supposed to deal with virtualization keeping + * VMs long enough scheduled out that the 64 * 32 bit multiplication in + * timekeeping_delta_to_ns() overflows 64bit. + */ +#ifdef CONFIG_TIMEKEEPING_USE_128BIT_MATH + +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) +{ + unsigned __int128 nsec; + + nsec = ((unsigned __int128)delta * tkr->mult) + tkr->xtime_nsec; + return (u64) (nsec >> tkr->shift); +} +#else +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) +{ + u32 dh, dl; + u64 nsec; + + dl = delta; + dh = delta >> 32; + + nsec = ((u64)dl * tkr->mult) + tkr->xtime_nsec; + nsec >>= tkr->shift; + if (unlikely(dh)) + nsec += ((u64)dh * tkr->mult) << (32 - tkr->shift); + return nsec; +} +#endif + +#else /* CONFIG_TIMEKEEPING_USE_128BIT_MATH */ + +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) { u64 nsec; @@ -309,6 +342,7 @@ static inline u64 timekeeping_delta_to_n /* If arch requires, add in get_arch_timeoffset() */ return nsec + arch_gettimeoffset(); } +#endif static inline u64 timekeeping_get_ns(struct tk_read_base *tkr) { ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-08 20:49 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Thomas Gleixner @ 2016-12-09 4:08 ` Ingo Molnar 2016-12-09 4:29 ` Ingo Molnar 2016-12-09 4:48 ` Peter Zijlstra 2016-12-09 5:11 ` Peter Zijlstra 2016-12-09 5:26 ` Peter Zijlstra 2 siblings, 2 replies; 35+ messages in thread From: Ingo Molnar @ 2016-12-09 4:08 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Peter Zijlstra, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall * Thomas Gleixner <tglx@linutronix.de> wrote: > If the timekeeping CPU is scheduled out long enough by a hypervisor the > clocksource delta multiplication can overflow and as a result time can go > backwards. That's insane to begin with, but people already triggered a > signed multiplication overflow, so a unsigned overflow is not necessarily > impossible. > > Implement optional 128bit math which can be selected by a config option. What's the rough VM interruption time that would trigger an overflow? Given that the clock shift tk_read_base::mult is often 1, isn't it 32-bit nsecs, i.e. 4 seconds? That doesn't sound 'insanely long'. Or some other value? 
> +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > +{ > + unsigned __int128 nsec; > + > + nsec = ((unsigned __int128)delta * tkr->mult) + tkr->xtime_nsec; > + return (u64) (nsec >> tkr->shift); > +} > +#else > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > +{ > + u32 dh, dl; > + u64 nsec; > + > + dl = delta; > + dh = delta >> 32; > + > + nsec = ((u64)dl * tkr->mult) + tkr->xtime_nsec; > + nsec >>= tkr->shift; > + if (unlikely(dh)) > + nsec += ((u64)dh * tkr->mult) << (32 - tkr->shift); > + return nsec; > +} > +#endif Actually, 128-bit multiplication shouldn't be too horrible - at least on 64-bit architectures. (128-bit division is another matter, but there's no division here.) So we might as well use this by default on 64-bit architectures that have 64-bit cycle counters - which the vast majority of hypervisors are. Assuming I'm correct that just 4 seconds of VM delay would make the whole logic unrobust. Thanks, Ingo ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 4:08 ` Ingo Molnar @ 2016-12-09 4:29 ` Ingo Molnar 2016-12-09 4:39 ` John Stultz 2016-12-09 4:48 ` Peter Zijlstra 1 sibling, 1 reply; 35+ messages in thread From: Ingo Molnar @ 2016-12-09 4:29 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Peter Zijlstra, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall, Linus Torvalds * Ingo Molnar <mingo@kernel.org> wrote: > > * Thomas Gleixner <tglx@linutronix.de> wrote: > > > If the timekeeping CPU is scheduled out long enough by a hypervisor the > > clocksource delta multiplication can overflow and as a result time can go > > backwards. That's insane to begin with, but people already triggered a > > signed multiplication overflow, so a unsigned overflow is not necessarily > > impossible. > > > > Implement optional 128bit math which can be selected by a config option. > > What's the rough VM interruption time that would trigger an overflow? Given that > the clock shift tk_read_base::mult is often 1, isn't it 32-bit nsecs, i.e. 4 > seconds? > > That doesn't sound 'insanely long'. > > Or some other value? Ok, wasn't fully awake yet: more realistic values of the scaling factor on x86 would allow cycles input values of up to ~70 billion with 64-bit math, which would allow deltas of up to about 1 minute with 64-bit math. I think we should at least detect (and report?) the overflow and sanitize the effects to the max offset instead of generating random overflown values. That would also allow the 128-bit multiplication only be done in the rare case when we overflow. Which in turn could then be made unconditional. Am I missing something? Thanks, Ingo ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 4:29 ` Ingo Molnar @ 2016-12-09 4:39 ` John Stultz 0 siblings, 0 replies; 35+ messages in thread From: John Stultz @ 2016-12-09 4:39 UTC (permalink / raw) To: Ingo Molnar Cc: Thomas Gleixner, LKML, Peter Zijlstra, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall, Linus Torvalds On Thu, Dec 8, 2016 at 8:29 PM, Ingo Molnar <mingo@kernel.org> wrote: > > * Ingo Molnar <mingo@kernel.org> wrote: > >> >> * Thomas Gleixner <tglx@linutronix.de> wrote: >> >> > If the timekeeping CPU is scheduled out long enough by a hypervisor the >> > clocksource delta multiplication can overflow and as a result time can go >> > backwards. That's insane to begin with, but people already triggered a >> > signed multiplication overflow, so a unsigned overflow is not necessarily >> > impossible. >> > >> > Implement optional 128bit math which can be selected by a config option. >> >> What's the rough VM interruption time that would trigger an overflow? Given that >> the clock shift tk_read_base::mult is often 1, isn't it 32-bit nsecs, i.e. 4 >> seconds? >> >> That doesn't sound 'insanely long'. >> >> Or some other value? > > Ok, wasn't fully awake yet: more realistic values of the scaling factor on x86 > would allow cycles input values of up to ~70 billion with 64-bit math, which would > allow deltas of up to about 1 minute with 64-bit math. So if I'm remembering properly, we pick mult/shift pairs such that the mult shouldn't overflow from ~10 minutes worth of cycles. > I think we should at least detect (and report?) the overflow and sanitize the > effects to the max offset instead of generating random overflown values. So with CONFIG_DEBUG_TIMEKEEPING, we do check to see if the cycle value is larger then the max_cycles and will report a warning. But this is done at interrupt time and not in the hotpath. 
thanks -john ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 4:08 ` Ingo Molnar 2016-12-09 4:29 ` Ingo Molnar @ 2016-12-09 4:48 ` Peter Zijlstra 2016-12-09 5:22 ` Ingo Molnar 1 sibling, 1 reply; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 4:48 UTC (permalink / raw) To: Ingo Molnar Cc: Thomas Gleixner, LKML, John Stultz, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Fri, Dec 09, 2016 at 05:08:26AM +0100, Ingo Molnar wrote: > > +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) > > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > > +{ > > + unsigned __int128 nsec; > > + > > + nsec = ((unsigned __int128)delta * tkr->mult) + tkr->xtime_nsec; > > + return (u64) (nsec >> tkr->shift); > > +} > > Actually, 128-bit multiplication shouldn't be too horrible - at least on 64-bit > architectures. (128-bit division is another matter, but there's no division here.) IIRC there are 64bit architectures that do not have a 64x64->128 mult, only a 64x64->64 mult instruction. Its not immediately apparent using __int128 will generate optimal code for those, nor is it a given GCC will not require libgcc functions for those. ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 4:48 ` Peter Zijlstra @ 2016-12-09 5:22 ` Ingo Molnar 2016-12-09 5:41 ` Peter Zijlstra 0 siblings, 1 reply; 35+ messages in thread From: Ingo Molnar @ 2016-12-09 5:22 UTC (permalink / raw) To: Peter Zijlstra Cc: Thomas Gleixner, LKML, John Stultz, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall * Peter Zijlstra <peterz@infradead.org> wrote: > On Fri, Dec 09, 2016 at 05:08:26AM +0100, Ingo Molnar wrote: > > > +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) > > > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > > > +{ > > > + unsigned __int128 nsec; > > > + > > > + nsec = ((unsigned __int128)delta * tkr->mult) + tkr->xtime_nsec; > > > + return (u64) (nsec >> tkr->shift); > > > +} > > > > > > Actually, 128-bit multiplication shouldn't be too horrible - at least on 64-bit > > > architectures. (128-bit division is another matter, but there's no division here.) > > > > IIRC there are 64bit architectures that do not have a 64x64->128 mult, > > only a 64x64->64 mult instruction. Its not immediately apparent using > > __int128 will generate optimal code for those, nor is it a given GCC > > will not require libgcc functions for those. > > Well, if the overflow case is rare (which it is in this case) then it should still > be relatively straightforward, something like: > > X and Y are 64-bit: > > X = Xh*2^32 + Xl > Y = Yh*2^32 + Yl > > X*Y = (Xh*2^32 + Xl)*(Yh*2^32 + Yl) > > = Xh*2^32*(Yh*2^32 + Yl) > + Xl*(Yh*2^32 + Yl) > > = Xh*Yh*2^64 > + Xh*Yl*2^32 > + Xl*Yh*2^32 > + Xl*Yl Which is four 32x32->64 multiplications in the worst case. Where a valid overflow threshold is relatively easy to determine in a hot path compatible fashion: if (Xh != 0 || Yh != 0) slow_path(); And this simple and fast overflow check should still cover the overwhelming majority of 'sane' systems.
(A more involved 'could it overflow' check of counting the high bits with 8 bit granularity by looking at the high bytes not at the words could be done in the slow path - to still avoid the 4 multiplications in most cases.) Am I missing something? Thanks, Ingo ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 5:22 ` Ingo Molnar @ 2016-12-09 5:41 ` Peter Zijlstra 0 siblings, 0 replies; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 5:41 UTC (permalink / raw) To: Ingo Molnar Cc: Thomas Gleixner, LKML, John Stultz, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Fri, Dec 09, 2016 at 06:22:03AM +0100, Ingo Molnar wrote: > > * Peter Zijlstra <peterz@infradead.org> wrote: > > > On Fri, Dec 09, 2016 at 05:08:26AM +0100, Ingo Molnar wrote: > > > > +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) > > > > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > > > > +{ > > > > + unsigned __int128 nsec; > > > > + > > > > + nsec = ((unsigned __int128)delta * tkr->mult) + tkr->xtime_nsec; > > > > + return (u64) (nsec >> tkr->shift); > > > > +} > > > > > > > > Actually, 128-bit multiplication shouldn't be too horrible - at least on 64-bit > > > > architectures. (128-bit division is another matter, but there's no division here.) > > > > > > IIRC there are 64bit architectures that do not have a 64x64->128 mult, > > > only a 64x64->64 mult instruction. Its not immediately apparent using > > > __int128 will generate optimal code for those, nor is it a given GCC > > > will not require libgcc functions for those. > > > > Well, if the overflow case is rare (which it is in this case) then it should still > > be relatively straightforward, something like: > > > > X and Y are 64-bit: > > > > X = Xh*2^32 + Xl > > Y = Yh*2^32 + Yl > > > > X*Y = (Xh*2^32 + Xl)*(Yh*2^32 + Yl) > > > > = Xh*2^32*(Yh*2^32 + Yl) > > + Xl*(Yh*2^32 + Yl) > > > > = Xh*Yh*2^64 > > + Xh*Yl*2^32 > > + Xl*Yh*2^32 > > + XL*Yl > > > > Which is four 32x32->64 multiplications in the worst case. Yeah, that's the full 64x64->128 mult on 32bit. Luckily we only need 64x32->96, which reduces to 2 32x32->64 mults. 
But my point was that unconditionally using __int128 might not be the right thing. > Where a valid overflow threshold is relatively easy to determine in a hot path > compatible fashion: > > if (Xh != 0 || Yh != 0) > slow_path(); > > And this simple and fast overflow check should still cover the overwhelming > majority of 'sane' systems. (A more involved 'could it overflow' check of counting > the high bits with 8 bit granularity by looking at the high bytes not at the words > could be done in the slow path - to still avoid the 4 multiplications in most > cases.) > > Am I missing something? Yeah, the fact that we only need the 2 mults and that the fallback already does the second multiply conditionally :-) But then look at the email where I said that that condition actually makes the thing vastly more expensive on some archs (like tilegx). ^ permalink raw reply [flat|nested] 35+ messages in thread
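Ingo's hand-expansion above can be sketched as plain C. This is a hedged, standalone illustration of the four-partial-product decomposition (not kernel code; the function name `mul_u64_u64_128` and the hi/lo output convention are made up for this sketch), with the cross-term carries folded into the high word:

```c
#include <assert.h>
#include <stdint.h>

/* Full 64x64->128 multiply built from four 32x32->64 partial
 * products, returning the 128-bit result as two 64-bit halves. */
static void mul_u64_u64_128(uint64_t x, uint64_t y,
                            uint64_t *hi, uint64_t *lo)
{
    uint32_t xl = (uint32_t)x, xh = (uint32_t)(x >> 32);
    uint32_t yl = (uint32_t)y, yh = (uint32_t)(y >> 32);

    uint64_t ll = (uint64_t)xl * yl;   /* Xl*Yl                 */
    uint64_t lh = (uint64_t)xl * yh;   /* Xl*Yh, weight 2^32    */
    uint64_t hl = (uint64_t)xh * yl;   /* Xh*Yl, weight 2^32    */
    uint64_t hh = (uint64_t)xh * yh;   /* Xh*Yh, weight 2^64    */

    /* Sum everything that lands in bits 32..63; this fits in a
     * u64 (at most ~2^34), so no carry is lost. */
    uint64_t mid = (ll >> 32) + (uint32_t)lh + (uint32_t)hl;

    *lo = (mid << 32) | (uint32_t)ll;
    *hi = hh + (lh >> 32) + (hl >> 32) + (mid >> 32);
}
```

The 64x32->96 case Peter refers to is this with `yh == 0`, which makes `lh` and the `hh` term vanish and leaves exactly the two multiplies.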
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-08 20:49 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Thomas Gleixner 2016-12-09 4:08 ` Ingo Molnar @ 2016-12-09 5:11 ` Peter Zijlstra 2016-12-09 6:08 ` Peter Zijlstra 2016-12-09 5:26 ` Peter Zijlstra 2 siblings, 1 reply; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 5:11 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Thu, Dec 08, 2016 at 08:49:39PM -0000, Thomas Gleixner wrote: > +/* > + * Enabled when timekeeping is supposed to deal with virtualization keeping > + * VMs long enough scheduled out that the 64 * 32 bit multiplication in > + * timekeeping_delta_to_ns() overflows 64bit. > + */ > +#ifdef CONFIG_TIMEKEEPING_USE_128BIT_MATH > + > +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > +{ > + unsigned __int128 nsec; > + > + nsec = ((unsigned __int128)delta * tkr->mult) + tkr->xtime_nsec; > + return (u64) (nsec >> tkr->shift); > +} > +#else > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > +{ > + u32 dh, dl; > + u64 nsec; > + > + dl = delta; > + dh = delta >> 32; > + > + nsec = ((u64)dl * tkr->mult) + tkr->xtime_nsec; > + nsec >>= tkr->shift; > + if (unlikely(dh)) > + nsec += ((u64)dh * tkr->mult) << (32 - tkr->shift); > + return nsec; > +} > +#endif > + > +#else /* CONFIG_TIMEKEEPING_USE_128BIT_MATH */ xtime_nsec confuses me, contrary to its name, its not actually in nsec, its in shifted nsec units for some reason (and that might well be a good reason, but I don't know). In any case, it needing to be inside the shift is somewhat unfortunate in that it doesn't allow you to use the existing mul_u64_u32_shr() ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 5:11 ` Peter Zijlstra @ 2016-12-09 6:08 ` Peter Zijlstra 0 siblings, 0 replies; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 6:08 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Fri, Dec 09, 2016 at 06:11:17AM +0100, Peter Zijlstra wrote: > On Thu, Dec 08, 2016 at 08:49:39PM -0000, Thomas Gleixner wrote: > > > +/* > > + * Enabled when timekeeping is supposed to deal with virtualization keeping > > + * VMs long enough scheduled out that the 64 * 32 bit multiplication in > > + * timekeeping_delta_to_ns() overflows 64bit. > > + */ > > +#ifdef CONFIG_TIMEKEEPING_USE_128BIT_MATH > > + > > +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) > > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > > +{ > > + unsigned __int128 nsec; > > + > > + nsec = ((unsigned __int128)delta * tkr->mult) + tkr->xtime_nsec; > > + return (u64) (nsec >> tkr->shift); > > +} > > +#else > > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > > +{ > > + u32 dh, dl; > > + u64 nsec; > > + > > + dl = delta; > > + dh = delta >> 32; > > + > > + nsec = ((u64)dl * tkr->mult) + tkr->xtime_nsec; > > + nsec >>= tkr->shift; > > + if (unlikely(dh)) > > + nsec += ((u64)dh * tkr->mult) << (32 - tkr->shift); > > + return nsec; > > +} > > +#endif > > + > > +#else /* CONFIG_TIMEKEEPING_USE_128BIT_MATH */ > > xtime_nsec confuses me, contrary to its name, its not actually in nsec, > its in shifted nsec units for some reason (and that might well be a good > reason, but I don't know). 
> > In any case, it needing to be inside the shift is somewhat unfortunate > in that it doesn't allow you to use the existing mul_u64_u32_shr() Wouldn't something like: nsec = mul_u64_u32_shr(delta, tkr->mult, tkr->shift); nsec += tkr->xtime_nsec >> tkr->shift; Be good enough? Sure you have a slight rounding error, which results in a few jaggies in the actual timeline, but it would still be monotonic. That is, we'll observe the ns rollover 'late', but given its ns, does anybody really care? ^ permalink raw reply [flat|nested] 35+ messages in thread
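The rounding error Peter describes is in fact bounded by one unit: splitting the shift turns `floor((a + b) / 2^s)` into `floor(a / 2^s) + floor(b / 2^s)`, and those two differ by at most 1, so the result stays monotonic and only observes the nanosecond rollover slightly late. A hedged standalone sketch (the generic `mul_u64_u32_shr()` fallback is inlined here rather than taken from `math64.h`):

```c
#include <assert.h>
#include <stdint.h>

/* Generic fallback shape of mul_u64_u32_shr(), inlined for this sketch. */
static uint64_t mul_u64_u32_shr(uint64_t a, uint32_t mul, unsigned int shift)
{
    uint32_t al = (uint32_t)a, ah = (uint32_t)(a >> 32);
    uint64_t ret = ((uint64_t)al * mul) >> shift;

    if (ah)
        ret += ((uint64_t)ah * mul) << (32 - shift);
    return ret;
}

/* Exact form: shift the full sum. */
static uint64_t ns_exact(uint64_t delta, uint32_t mult, unsigned int shift,
                         uint64_t xtime_nsec)
{
    return (uint64_t)(((unsigned __int128)delta * mult + xtime_nsec) >> shift);
}

/* Peter's proposal: shift each term separately. */
static uint64_t ns_approx(uint64_t delta, uint32_t mult, unsigned int shift,
                          uint64_t xtime_nsec)
{
    return mul_u64_u32_shr(delta, mult, shift) + (xtime_nsec >> shift);
}
```

For any inputs (with `shift <= 32`), `ns_exact() - ns_approx()` is 0 or 1, since only the fractional parts of the two terms can combine into one extra carried nanosecond.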
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-08 20:49 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Thomas Gleixner 2016-12-09 4:08 ` Ingo Molnar 2016-12-09 5:11 ` Peter Zijlstra @ 2016-12-09 5:26 ` Peter Zijlstra 2016-12-09 6:38 ` Peter Zijlstra 2 siblings, 1 reply; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 5:26 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Thu, Dec 08, 2016 at 08:49:39PM -0000, Thomas Gleixner wrote: > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > +{ > + u32 dh, dl; > + u64 nsec; > + > + dl = delta; > + dh = delta >> 32; > + > + nsec = ((u64)dl * tkr->mult) + tkr->xtime_nsec; > + nsec >>= tkr->shift; > + if (unlikely(dh)) > + nsec += ((u64)dh * tkr->mult) << (32 - tkr->shift); > + return nsec; > +} Just for giggles, on tilegx the branch is actually slower than doing the mult unconditionally. The problem is that the two multiplies would otherwise completely pipeline, whereas with the conditional you serialize them. (came to light while talking about why the mul_u64_u32_shr() fallback didn't work right for them, which was a combination of the above issue and the fact that their compiler 'lost' the fact that these are 32x32->64 mults and did 64x64 ones instead). ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 5:26 ` Peter Zijlstra @ 2016-12-09 6:38 ` Peter Zijlstra 2016-12-09 8:30 ` Peter Zijlstra 2016-12-09 10:18 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Peter Zijlstra 0 siblings, 2 replies; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 6:38 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Fri, Dec 09, 2016 at 06:26:38AM +0100, Peter Zijlstra wrote: > On Thu, Dec 08, 2016 at 08:49:39PM -0000, Thomas Gleixner wrote: > > > +static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta) > > +{ > > + u32 dh, dl; > > + u64 nsec; > > + > > + dl = delta; > > + dh = delta >> 32; > > + > > + nsec = ((u64)dl * tkr->mult) + tkr->xtime_nsec; > > + nsec >>= tkr->shift; > > + if (unlikely(dh)) > > + nsec += ((u64)dh * tkr->mult) << (32 - tkr->shift); > > + return nsec; > > +} > > Just for giggles, on tilegx the branch is actually slower than doing the > mult unconditionally. > > The problem is that the two multiplies would otherwise completely > pipeline, whereas with the conditional you serialize them. On my Haswell laptop the unconditional version is faster too. > (came to light while talking about why the mul_u64_u32_shr() fallback > didn't work right for them, which was a combination of the above issue > and the fact that their compiler 'lost' the fact that these are > 32x32->64 mults and did 64x64 ones instead). Turns out using GCC-6.2.1 we have the same problem on i386, GCC doesn't recognise the 32x32 mults and generates crap. This used to work :/ ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 6:38 ` Peter Zijlstra @ 2016-12-09 8:30 ` Peter Zijlstra 2016-12-09 9:11 ` Peter Zijlstra ` (3 more replies) 2016-12-09 10:18 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Peter Zijlstra 1 sibling, 4 replies; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 8:30 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Fri, Dec 09, 2016 at 07:38:47AM +0100, Peter Zijlstra wrote: > On Fri, Dec 09, 2016 at 06:26:38AM +0100, Peter Zijlstra wrote: > > Just for giggles, on tilegx the branch is actually slower than doing the > > mult unconditionally. > > > > The problem is that the two multiplies would otherwise completely > > pipeline, whereas with the conditional you serialize them. > > On my Haswell laptop the unconditional version is faster too. Only when using x86_64 instructions, once I fixed the i386 variant it was slower, probably due to register pressure and the like. > > (came to light while talking about why the mul_u64_u32_shr() fallback > > didn't work right for them, which was a combination of the above issue > > and the fact that their compiler 'lost' the fact that these are > > 32x32->64 mults and did 64x64 ones instead). > > Turns out using GCC-6.2.1 we have the same problem on i386, GCC doesn't > recognise the 32x32 mults and generates crap. > > This used to work :/ Do we want something like so? 
--- arch/tile/include/asm/Kbuild | 1 - arch/tile/include/asm/div64.h | 14 ++++++++++++++ arch/x86/include/asm/div64.h | 10 ++++++++++ include/linux/math64.h | 26 ++++++++++++++++++-------- 4 files changed, 42 insertions(+), 9 deletions(-) diff --git a/arch/tile/include/asm/Kbuild b/arch/tile/include/asm/Kbuild index 2d1f5638974c..20f2ba6d79be 100644 --- a/arch/tile/include/asm/Kbuild +++ b/arch/tile/include/asm/Kbuild @@ -5,7 +5,6 @@ generic-y += bug.h generic-y += bugs.h generic-y += clkdev.h generic-y += cputime.h -generic-y += div64.h generic-y += emergency-restart.h generic-y += errno.h generic-y += exec.h diff --git a/arch/tile/include/asm/div64.h b/arch/tile/include/asm/div64.h index e69de29bb2d1..bf6161966dfa 100644 --- a/arch/tile/include/asm/div64.h +++ b/arch/tile/include/asm/div64.h @@ -0,0 +1,14 @@ +#ifndef _ASM_TILE_DIV64_H +#define _ASM_TILE_DIV64_H + +#ifdef __tilegx__ +static inline u64 mul_u32_u32(u32 a, u32 b) +{ + return __insn_mul_lu_lu(a, b); +} +#define mul_u32_u32 mul_u32_u32 +#endif + +#include <asm-generic/div64.h> + +#endif /* _ASM_TILE_DIV64_H */ diff --git a/arch/x86/include/asm/div64.h b/arch/x86/include/asm/div64.h index ced283ac79df..68f4ae5e8976 100644 --- a/arch/x86/include/asm/div64.h +++ b/arch/x86/include/asm/div64.h @@ -59,6 +59,16 @@ static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder) } #define div_u64_rem div_u64_rem +static inline u64 mul_u32_u32(u32 a, u32 b) +{ + u64 ret; + + asm ("mull %[b]" : "=A" (ret) : [a] "a" (a), [b] "g" (b) ); + + return ret; +} +#define mul_u32_u32 mul_u32_u32 + #else # include <asm-generic/div64.h> #endif /* CONFIG_X86_32 */ diff --git a/include/linux/math64.h b/include/linux/math64.h index 6e8b5b270ffe..80690c96c734 100644 --- a/include/linux/math64.h +++ b/include/linux/math64.h @@ -133,6 +133,16 @@ static inline s64 div_s64(s64 dividend, s32 divisor) return ret; } +#ifndef mul_u32_u32 +/* + * Many a GCC version messes this up and generates a 64x64 mult :-( + */ +static 
inline u64 mul_u32_u32(u32 a, u32 b) +{ + return (u64)a * b; +} +#endif + #if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) #ifndef mul_u64_u32_shr @@ -160,9 +170,9 @@ static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift) al = a; ah = a >> 32; - ret = ((u64)al * mul) >> shift; + ret = mul_u32_u32(al, mul) >> shift; if (ah) - ret += ((u64)ah * mul) << (32 - shift); + ret += mul_u32_u32(ah, mul) << (32 - shift); return ret; } @@ -186,10 +196,10 @@ static inline u64 mul_u64_u64_shr(u64 a, u64 b, unsigned int shift) a0.ll = a; b0.ll = b; - rl.ll = (u64)a0.l.low * b0.l.low; - rm.ll = (u64)a0.l.low * b0.l.high; - rn.ll = (u64)a0.l.high * b0.l.low; - rh.ll = (u64)a0.l.high * b0.l.high; + rl.ll = mul_u32_u32(a0.l.low, b0.l.low); + rm.ll = mul_u32_u32(a0.l.low, b0.l.high); + rn.ll = mul_u32_u32(a0.l.high, b0.l.low); + rh.ll = mul_u32_u32(a0.l.high, b0.l.high); /* * Each of these lines computes a 64-bit intermediate result into "c", @@ -229,8 +239,8 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor) } u, rl, rh; u.ll = a; - rl.ll = (u64)u.l.low * mul; - rh.ll = (u64)u.l.high * mul + rl.l.high; + rl.ll = mul_u32_u32(u.l.low, mul); + rh.ll = mul_u32_u32(u.l.high, mul) + rl.l.high; /* Bits 32-63 of the result will be in rh.l.low. */ rl.l.high = do_div(rh.ll, divisor); ^ permalink raw reply related [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 8:30 ` Peter Zijlstra @ 2016-12-09 9:11 ` Peter Zijlstra 2016-12-09 10:01 ` Peter Zijlstra ` (2 subsequent siblings) 3 siblings, 0 replies; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 9:11 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Fri, Dec 09, 2016 at 09:30:11AM +0100, Peter Zijlstra wrote: > > > Just for giggles, on tilegx the branch is actually slower than doing the > > > mult unconditionally. > > > > > > The problem is that the two multiplies would otherwise completely > > > pipeline, whereas with the conditional you serialize them. > Only when using x86_64 instructions, once I fixed the i386 variant it > was slower, probably due to register pressure and the like. OK, maybe I messed up on i386, although I've yet to try running that on an actual 32bit machine. I also need to dig up a small core, who knows what atoms do. Results are in cycles, average over 1e6 loops. I think the 128 results are around 1 cycle, measurements are maybe a tad wobbly because I compare against an empty loop to correct measurement overhead. 
root@ivb-ep:~/tmp# for i in -m64 -m32 -mx32; do echo $i ; gcc -O3 $i -o mult mult.c -lm; ./mult ; done -m64 cond: avg: 5.487738 +- 0.004152 uncond: avg: 4.495690 +- 0.006009 128: avg: 0.634496 +- 0.004795 -m32 cond: avg: 14.807630 +- 0.006890 uncond: avg: 11.601985 +- 0.009722 -mx32 cond: avg: 5.027696 +- 0.005766 uncond: avg: 4.038013 +- 0.008069 128: avg: 0.009928 +- 0.005730 root@hsw:~/tmp# for i in -m64 -m32 -mx32; do echo $i ; gcc -O3 $i -o mult mult.c -lm; ./mult ; done -m64 cond: avg: 1.998718 +- 0.008775 uncond: avg: 2.004795 +- 0.009865 128: avg: 0.991947 +- 0.007607 -m32 cond: avg: 12.981868 +- 0.011239 uncond: avg: 13.000566 +- 0.011668 -mx32 cond: avg: 2.005437 +- 0.006840 uncond: avg: 3.001631 +- 0.004786 128: avg: 1.990425 +- 0.003880 ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 8:30 ` Peter Zijlstra 2016-12-09 9:11 ` Peter Zijlstra @ 2016-12-09 10:01 ` Peter Zijlstra 2016-12-09 17:32 ` Chris Metcalf 2017-01-14 12:51 ` [tip:timers/core] math64, timers: Fix 32bit mul_u64_u32_shr() and friends tip-bot for Peter Zijlstra 3 siblings, 0 replies; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 10:01 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Fri, Dec 09, 2016 at 09:30:11AM +0100, Peter Zijlstra wrote: > +static inline u64 mul_u32_u32(u32 a, u32 b) > +{ > + u64 ret; > + > + asm ("mull %[b]" : "=A" (ret) : [a] "a" (a), [b] "g" (b) ); > + > + return ret; > +} ARGH, that's broken on x86_64, it needs to be: u32 high, low; asm ("mull %[b]" : "=a" (low), "=d" (high) : [a] "a" (a), [b] "g" (b) ); return low | ((u64)high) << 32; The 'A' constraint doesn't work right. And with that all the benchmark results are borken too. root@ivb-ep:~/spinlocks# for i in -m64 -m32 -mx32 ; do echo $i; gcc -O3 $i -o mult mult.c -lm; ./mult; done -m64 cond: avg: 7.474872 +- 0.008302 uncond: avg: 9.116401 +- 0.008468 128: avg: 0.826584 +- 0.005514 -m32 cond: avg: 16.604030 +- 0.009808 uncond: avg: 13.115470 +- 0.004452 -mx32 cond: avg: 6.168156 +- 0.006650 uncond: avg: 7.202092 +- 0.006813 128: avg: 0.081809 +- 0.008440 ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 8:30 ` Peter Zijlstra 2016-12-09 9:11 ` Peter Zijlstra 2016-12-09 10:01 ` Peter Zijlstra @ 2016-12-09 17:32 ` Chris Metcalf 2017-01-14 12:51 ` [tip:timers/core] math64, timers: Fix 32bit mul_u64_u32_shr() and friends tip-bot for Peter Zijlstra 3 siblings, 0 replies; 35+ messages in thread From: Chris Metcalf @ 2016-12-09 17:32 UTC (permalink / raw) To: Peter Zijlstra, Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On 12/9/2016 3:30 AM, Peter Zijlstra wrote: > On Fri, Dec 09, 2016 at 07:38:47AM +0100, Peter Zijlstra wrote: >> On Fri, Dec 09, 2016 at 06:26:38AM +0100, Peter Zijlstra wrote: >>> Just for giggles, on tilegx the branch is actually slower than doing the >>> mult unconditionally. >>> >>> The problem is that the two multiplies would otherwise completely >>> pipeline, whereas with the conditional you serialize them. >> On my Haswell laptop the unconditional version is faster too. > Only when using x86_64 instructions, once I fixed the i386 variant it > was slower, probably due to register pressure and the like. > >>> (came to light while talking about why the mul_u64_u32_shr() fallback >>> didn't work right for them, which was a combination of the above issue >>> and the fact that their compiler 'lost' the fact that these are >>> 32x32->64 mults and did 64x64 ones instead). >> Turns out using GCC-6.2.1 we have the same problem on i386, GCC doesn't >> recognise the 32x32 mults and generates crap. >> >> This used to work :/ > Do we want something like so? > > --- > arch/tile/include/asm/Kbuild | 1 - > arch/tile/include/asm/div64.h | 14 ++++++++++++++ > arch/x86/include/asm/div64.h | 10 ++++++++++ > include/linux/math64.h | 26 ++++++++++++++++++-------- > 4 files changed, 42 insertions(+), 9 deletions(-) Untested, but I looked at it closely, and it seems like a decent idea. 
Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile] Of course if this is pushed up, it will then probably be too tempting for me not to add the tilegx-specific mul_u64_u32_shr() to take advantage of pipelining the two 32x32->64 multiplies :-) -- Chris Metcalf, Mellanox Technologies http://www.mellanox.com ^ permalink raw reply [flat|nested] 35+ messages in thread
* [tip:timers/core] math64, timers: Fix 32bit mul_u64_u32_shr() and friends 2016-12-09 8:30 ` Peter Zijlstra ` (2 preceding siblings ...) 2016-12-09 17:32 ` Chris Metcalf @ 2017-01-14 12:51 ` tip-bot for Peter Zijlstra 3 siblings, 0 replies; 35+ messages in thread From: tip-bot for Peter Zijlstra @ 2017-01-14 12:51 UTC (permalink / raw) To: linux-tip-commits Cc: hpa, christopher.s.hall, lvivier, peterz, cmetcalf, mingo, prarit, liavr, tglx, john.stultz, torvalds, richardcochran, linux-kernel, david Commit-ID: 9e3d6223d2093a8903c8f570a06284453ee59944 Gitweb: http://git.kernel.org/tip/9e3d6223d2093a8903c8f570a06284453ee59944 Author: Peter Zijlstra <peterz@infradead.org> AuthorDate: Fri, 9 Dec 2016 09:30:11 +0100 Committer: Ingo Molnar <mingo@kernel.org> CommitDate: Sat, 14 Jan 2017 11:31:50 +0100 math64, timers: Fix 32bit mul_u64_u32_shr() and friends It turns out that while GCC-4.4 manages to generate 32x32->64 mult instructions for the 32bit mul_u64_u32_shr() code, any GCC after that fails horribly. Fix this by providing an explicit mul_u32_u32() function which can be architecture provided. Reported-by: Chris Metcalf <cmetcalf@mellanox.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile] Cc: Christopher S. 
Hall <christopher.s.hall@intel.com> Cc: David Gibson <david@gibson.dropbear.id.au> Cc: John Stultz <john.stultz@linaro.org> Cc: Laurent Vivier <lvivier@redhat.com> Cc: Liav Rehana <liavr@mellanox.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Parit Bhargava <prarit@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20161209083011.GD15765@worktop.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> --- arch/tile/include/asm/Kbuild | 1 - arch/tile/include/asm/div64.h | 14 ++++++++++++++ arch/x86/include/asm/div64.h | 11 +++++++++++ include/linux/math64.h | 26 ++++++++++++++++++-------- 4 files changed, 43 insertions(+), 9 deletions(-) diff --git a/arch/tile/include/asm/Kbuild b/arch/tile/include/asm/Kbuild index 2d1f563..20f2ba6 100644 --- a/arch/tile/include/asm/Kbuild +++ b/arch/tile/include/asm/Kbuild @@ -5,7 +5,6 @@ generic-y += bug.h generic-y += bugs.h generic-y += clkdev.h generic-y += cputime.h -generic-y += div64.h generic-y += emergency-restart.h generic-y += errno.h generic-y += exec.h diff --git a/arch/tile/include/asm/div64.h b/arch/tile/include/asm/div64.h new file mode 100644 index 0000000..bf61619 --- /dev/null +++ b/arch/tile/include/asm/div64.h @@ -0,0 +1,14 @@ +#ifndef _ASM_TILE_DIV64_H +#define _ASM_TILE_DIV64_H + +#ifdef __tilegx__ +static inline u64 mul_u32_u32(u32 a, u32 b) +{ + return __insn_mul_lu_lu(a, b); +} +#define mul_u32_u32 mul_u32_u32 +#endif + +#include <asm-generic/div64.h> + +#endif /* _ASM_TILE_DIV64_H */ diff --git a/arch/x86/include/asm/div64.h b/arch/x86/include/asm/div64.h index ced283a..af95c47 100644 --- a/arch/x86/include/asm/div64.h +++ b/arch/x86/include/asm/div64.h @@ -59,6 +59,17 @@ static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder) } #define div_u64_rem div_u64_rem +static inline u64 mul_u32_u32(u32 a, u32 b) +{ + u32 high, low; + + asm ("mull 
%[b]" : "=a" (low), "=d" (high) + : [a] "a" (a), [b] "rm" (b) ); + + return low | ((u64)high) << 32; +} +#define mul_u32_u32 mul_u32_u32 + #else # include <asm-generic/div64.h> #endif /* CONFIG_X86_32 */ diff --git a/include/linux/math64.h b/include/linux/math64.h index 6e8b5b2..80690c9 100644 --- a/include/linux/math64.h +++ b/include/linux/math64.h @@ -133,6 +133,16 @@ __iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder) return ret; } +#ifndef mul_u32_u32 +/* + * Many a GCC version messes this up and generates a 64x64 mult :-( + */ +static inline u64 mul_u32_u32(u32 a, u32 b) +{ + return (u64)a * b; +} +#endif + #if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) #ifndef mul_u64_u32_shr @@ -160,9 +170,9 @@ static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift) al = a; ah = a >> 32; - ret = ((u64)al * mul) >> shift; + ret = mul_u32_u32(al, mul) >> shift; if (ah) - ret += ((u64)ah * mul) << (32 - shift); + ret += mul_u32_u32(ah, mul) << (32 - shift); return ret; } @@ -186,10 +196,10 @@ static inline u64 mul_u64_u64_shr(u64 a, u64 b, unsigned int shift) a0.ll = a; b0.ll = b; - rl.ll = (u64)a0.l.low * b0.l.low; - rm.ll = (u64)a0.l.low * b0.l.high; - rn.ll = (u64)a0.l.high * b0.l.low; - rh.ll = (u64)a0.l.high * b0.l.high; + rl.ll = mul_u32_u32(a0.l.low, b0.l.low); + rm.ll = mul_u32_u32(a0.l.low, b0.l.high); + rn.ll = mul_u32_u32(a0.l.high, b0.l.low); + rh.ll = mul_u32_u32(a0.l.high, b0.l.high); /* * Each of these lines computes a 64-bit intermediate result into "c", @@ -229,8 +239,8 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor) } u, rl, rh; u.ll = a; - rl.ll = (u64)u.l.low * mul; - rh.ll = (u64)u.l.high * mul + rl.l.high; + rl.ll = mul_u32_u32(u.l.low, mul); + rh.ll = mul_u32_u32(u.l.high, mul) + rl.l.high; /* Bits 32-63 of the result will be in rh.l.low. */ rl.l.high = do_div(rh.ll, divisor); ^ permalink raw reply related [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 6:38 ` Peter Zijlstra 2016-12-09 8:30 ` Peter Zijlstra @ 2016-12-09 10:18 ` Peter Zijlstra 2016-12-09 17:20 ` Chris Metcalf 1 sibling, 1 reply; 35+ messages in thread From: Peter Zijlstra @ 2016-12-09 10:18 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On Fri, Dec 09, 2016 at 07:38:47AM +0100, Peter Zijlstra wrote: > Turns out using GCC-6.2.1 we have the same problem on i386, GCC doesn't > recognise the 32x32 mults and generates crap. > > This used to work :/ I tried: gcc-4.4: good gcc-4.6, gcc-4.8, gcc-5.4, gcc-6.2: bad ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 5/6] [RFD] timekeeping: Provide optional 128bit math 2016-12-09 10:18 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Peter Zijlstra @ 2016-12-09 17:20 ` Chris Metcalf 0 siblings, 0 replies; 35+ messages in thread From: Chris Metcalf @ 2016-12-09 17:20 UTC (permalink / raw) To: Peter Zijlstra, Thomas Gleixner Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall On 12/9/2016 5:18 AM, Peter Zijlstra wrote: > On Fri, Dec 09, 2016 at 07:38:47AM +0100, Peter Zijlstra wrote: > >> Turns out using GCC-6.2.1 we have the same problem on i386, GCC doesn't >> recognise the 32x32 mults and generates crap. >> >> This used to work :/ > I tried: > > gcc-4.4: good > gcc-4.6, gcc-4.8, gcc-5.4, gcc-6.2: bad I also found 4.4 was good on tilegx at recognizing the 32x32, and bad on the later versions I tested; I don't recall which specific later versions I tried, though. -- Chris Metcalf, Mellanox Technologies http://www.mellanox.com ^ permalink raw reply [flat|nested] 35+ messages in thread
* [patch 6/6] [RFD] timekeeping: Get rid of cycle_t 2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner ` (4 preceding siblings ...) 2016-12-08 20:49 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Thomas Gleixner @ 2016-12-08 20:49 ` Thomas Gleixner 2016-12-08 23:43 ` David Gibson 2016-12-09 4:52 ` [patch 0/6] timekeeping: Cure the signed/unsigned wreckage John Stultz 2016-12-09 5:30 ` Peter Zijlstra 7 siblings, 1 reply; 35+ messages in thread From: Thomas Gleixner @ 2016-12-08 20:49 UTC (permalink / raw) To: LKML Cc: John Stultz, Peter Zijlstra, Ingo Molnar, David Gibson, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: timekeeping--Get-rid-of-cycle_t.patch --] [-- Type: text/plain, Size: 89979 bytes --] Kill the ever confusing typedef and use u64. NOT FOR INCLUSION - Must be regenerated at some point via coccinelle Not-Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- arch/alpha/kernel/time.c | 4 - arch/arc/kernel/time.c | 12 ++-- arch/arm/mach-davinci/time.c | 2 arch/arm/mach-ep93xx/timer-ep93xx.c | 4 - arch/arm/mach-footbridge/dc21285-timer.c | 2 arch/arm/mach-ixp4xx/common.c | 2 arch/arm/mach-mmp/time.c | 2 arch/arm/mach-omap2/timer.c | 4 - arch/arm/plat-iop/time.c | 2 arch/avr32/kernel/time.c | 4 - arch/blackfin/kernel/time-ts.c | 4 - arch/c6x/kernel/time.c | 2 arch/hexagon/kernel/time.c | 4 - arch/ia64/kernel/cyclone.c | 4 - arch/ia64/kernel/fsyscall_gtod_data.h | 6 +- arch/ia64/kernel/time.c | 6 +- arch/ia64/sn/kernel/sn2/timer.c | 4 - arch/m68k/68000/timers.c | 2 arch/m68k/coldfire/dma_timer.c | 2 arch/m68k/coldfire/pit.c | 2 arch/m68k/coldfire/sltimers.c | 2 arch/m68k/coldfire/timers.c | 2 arch/microblaze/kernel/timer.c | 6 +- arch/mips/alchemy/common/time.c | 2 arch/mips/cavium-octeon/csrc-octeon.c | 2 arch/mips/jz4740/time.c | 2 arch/mips/kernel/cevt-txx9.c | 2 arch/mips/kernel/csrc-bcm1480.c | 4 - 
arch/mips/kernel/csrc-ioasic.c | 2 arch/mips/kernel/csrc-r4k.c | 2 arch/mips/kernel/csrc-sb1250.c | 4 - arch/mips/loongson32/common/time.c | 4 - arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c | 4 - arch/mips/loongson64/loongson-3/hpet.c | 4 - arch/mips/mti-malta/malta-time.c | 2 arch/mips/netlogic/common/time.c | 4 - arch/mips/sgi-ip27/ip27-timer.c | 2 arch/mn10300/kernel/csrc-mn10300.c | 2 arch/nios2/kernel/time.c | 2 arch/openrisc/kernel/time.c | 4 - arch/parisc/kernel/time.c | 2 arch/powerpc/kernel/time.c | 14 ++--- arch/s390/kernel/time.c | 2 arch/sparc/kernel/time_32.c | 2 arch/sparc/kernel/time_64.c | 2 arch/um/kernel/time.c | 2 arch/unicore32/kernel/time.c | 2 arch/x86/entry/vdso/vclock_gettime.c | 8 +-- arch/x86/include/asm/kvm_host.h | 2 arch/x86/include/asm/pvclock.h | 6 +- arch/x86/include/asm/tsc.h | 2 arch/x86/include/asm/vgtod.h | 4 - arch/x86/kernel/apb_timer.c | 4 - arch/x86/kernel/cpu/mshyperv.c | 4 - arch/x86/kernel/hpet.c | 14 ++--- arch/x86/kernel/kvmclock.c | 10 ++-- arch/x86/kernel/pvclock.c | 4 - arch/x86/kernel/tsc.c | 6 +- arch/x86/kvm/x86.c | 14 ++--- arch/x86/lguest/boot.c | 2 arch/x86/platform/uv/uv_time.c | 8 +-- arch/x86/xen/time.c | 6 +- arch/x86/xen/xen-ops.h | 2 arch/xtensa/kernel/time.c | 4 - drivers/char/hpet.c | 4 - drivers/clocksource/acpi_pm.c | 14 ++--- drivers/clocksource/arm_arch_timer.c | 4 - drivers/clocksource/arm_global_timer.c | 2 drivers/clocksource/cadence_ttc_timer.c | 4 - drivers/clocksource/clksrc-dbx500-prcmu.c | 2 drivers/clocksource/dw_apb_timer.c | 8 +-- drivers/clocksource/em_sti.c | 12 ++-- drivers/clocksource/exynos_mct.c | 6 +- drivers/clocksource/h8300_timer16.c | 2 drivers/clocksource/h8300_tpu.c | 2 drivers/clocksource/i8253.c | 4 - drivers/clocksource/jcore-pit.c | 2 drivers/clocksource/metag_generic.c | 2 drivers/clocksource/mips-gic-timer.c | 2 drivers/clocksource/mmio.c | 18 +++---- drivers/clocksource/mxs_timer.c | 2 drivers/clocksource/qcom-timer.c | 2 drivers/clocksource/samsung_pwm_timer.c | 2 
drivers/clocksource/scx200_hrt.c | 4 - drivers/clocksource/sh_cmt.c | 2 drivers/clocksource/sh_tmu.c | 2 drivers/clocksource/tcb_clksrc.c | 4 - drivers/clocksource/time-pistachio.c | 4 - drivers/clocksource/timer-atlas7.c | 2 drivers/clocksource/timer-atmel-pit.c | 2 drivers/clocksource/timer-atmel-st.c | 2 drivers/clocksource/timer-nps.c | 4 - drivers/clocksource/timer-prima2.c | 2 drivers/clocksource/timer-sun5i.c | 2 drivers/clocksource/timer-ti-32k.c | 4 - drivers/clocksource/vt8500_timer.c | 4 - drivers/hv/hv.c | 8 +-- drivers/irqchip/irq-mips-gic.c | 16 +++--- drivers/net/ethernet/amd/xgbe/xgbe-ptp.c | 2 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 2 drivers/net/ethernet/freescale/fec_ptp.c | 2 drivers/net/ethernet/intel/e1000e/netdev.c | 18 +++---- drivers/net/ethernet/intel/e1000e/ptp.c | 4 - drivers/net/ethernet/intel/igb/igb_ptp.c | 4 - drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c | 4 - drivers/net/ethernet/mellanox/mlx4/en_clock.c | 2 drivers/net/ethernet/mellanox/mlx4/main.c | 4 - drivers/net/ethernet/mellanox/mlx5/core/en_clock.c | 2 drivers/net/ethernet/mellanox/mlx5/core/main.c | 4 - drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h | 2 drivers/net/ethernet/ti/cpts.c | 2 include/kvm/arm_arch_timer.h | 4 - include/linux/clocksource.h | 22 ++++---- include/linux/dw_apb_timer.h | 2 include/linux/irqchip/mips-gic.h | 8 +-- include/linux/mlx4/device.h | 2 include/linux/timecounter.h | 12 ++-- include/linux/timekeeper_internal.h | 10 ++-- include/linux/timekeeping.h | 4 - include/linux/types.h | 3 - kernel/time/clocksource.c | 2 kernel/time/jiffies.c | 4 - kernel/time/timecounter.c | 6 +- kernel/time/timekeeping.c | 50 ++++++++++---------- kernel/time/timekeeping_internal.h | 6 +- kernel/trace/ftrace.c | 4 - kernel/trace/trace.c | 6 +- kernel/trace/trace.h | 8 +-- kernel/trace/trace_irqsoff.c | 4 - kernel/trace/trace_sched_wakeup.c | 4 - sound/hda/hdac_stream.c | 6 +- virt/kvm/arm/arch_timer.c | 6 +- 132 files changed, 318 insertions(+), 321 
deletions(-) --- a/arch/alpha/kernel/time.c +++ b/arch/alpha/kernel/time.c @@ -133,7 +133,7 @@ init_rtc_clockevent(void) * The QEMU clock as a clocksource primitive. */ -static cycle_t +static u64 qemu_cs_read(struct clocksource *cs) { return qemu_get_vmtime(); @@ -260,7 +260,7 @@ common_init_rtc(void) * use this method when WTINT is in use. */ -static cycle_t read_rpcc(struct clocksource *cs) +static u64 read_rpcc(struct clocksource *cs) { return rpcc(); } --- a/arch/arc/kernel/time.c +++ b/arch/arc/kernel/time.c @@ -83,7 +83,7 @@ static int noinline arc_get_timer_clk(st #ifdef CONFIG_ARC_HAS_GFRC -static cycle_t arc_read_gfrc(struct clocksource *cs) +static u64 arc_read_gfrc(struct clocksource *cs) { unsigned long flags; union { @@ -92,7 +92,7 @@ static cycle_t arc_read_gfrc(struct cloc #else struct { u32 l, h; }; #endif - cycle_t full; + u64 full; } stamp; local_irq_save(flags); @@ -140,7 +140,7 @@ CLOCKSOURCE_OF_DECLARE(arc_gfrc, "snps,a #define AUX_RTC_LOW 0x104 #define AUX_RTC_HIGH 0x105 -static cycle_t arc_read_rtc(struct clocksource *cs) +static u64 arc_read_rtc(struct clocksource *cs) { unsigned long status; union { @@ -149,7 +149,7 @@ static cycle_t arc_read_rtc(struct clock #else struct { u32 low, high; }; #endif - cycle_t full; + u64 full; } stamp; /* @@ -203,9 +203,9 @@ CLOCKSOURCE_OF_DECLARE(arc_rtc, "snps,ar * 32bit TIMER1 to keep counting monotonically and wraparound */ -static cycle_t arc_read_timer1(struct clocksource *cs) +static u64 arc_read_timer1(struct clocksource *cs) { - return (cycle_t) read_aux_reg(ARC_REG_TIMER1_CNT); + return (u64) read_aux_reg(ARC_REG_TIMER1_CNT); } static struct clocksource arc_counter_timer1 = { --- a/arch/arm/mach-davinci/time.c +++ b/arch/arm/mach-davinci/time.c @@ -268,7 +268,7 @@ static void __init timer_init(void) /* * clocksource */ -static cycle_t read_cycles(struct clocksource *cs) +static u64 read_cycles(struct clocksource *cs) { struct timer_s *t = &timers[TID_CLOCKSOURCE]; --- 
a/arch/arm/mach-ep93xx/timer-ep93xx.c +++ b/arch/arm/mach-ep93xx/timer-ep93xx.c @@ -59,13 +59,13 @@ static u64 notrace ep93xx_read_sched_clo return ret; } -cycle_t ep93xx_clocksource_read(struct clocksource *c) +u64 ep93xx_clocksource_read(struct clocksource *c) { u64 ret; ret = readl(EP93XX_TIMER4_VALUE_LOW); ret |= ((u64) (readl(EP93XX_TIMER4_VALUE_HIGH) & 0xff) << 32); - return (cycle_t) ret; + return ret; } static int ep93xx_clkevt_set_next_event(unsigned long next, --- a/arch/arm/mach-footbridge/dc21285-timer.c +++ b/arch/arm/mach-footbridge/dc21285-timer.c @@ -19,7 +19,7 @@ #include "common.h" -static cycle_t cksrc_dc21285_read(struct clocksource *cs) +static u64 cksrc_dc21285_read(struct clocksource *cs) { return cs->mask - *CSR_TIMER2_VALUE; } --- a/arch/arm/mach-ixp4xx/common.c +++ b/arch/arm/mach-ixp4xx/common.c @@ -493,7 +493,7 @@ static u64 notrace ixp4xx_read_sched_clo * clocksource */ -static cycle_t ixp4xx_clocksource_read(struct clocksource *c) +static u64 ixp4xx_clocksource_read(struct clocksource *c) { return *IXP4XX_OSTS; } --- a/arch/arm/mach-mmp/time.c +++ b/arch/arm/mach-mmp/time.c @@ -144,7 +144,7 @@ static struct clock_event_device ckevt = .set_state_oneshot = timer_set_shutdown, }; -static cycle_t clksrc_read(struct clocksource *cs) +static u64 clksrc_read(struct clocksource *cs) { return timer_read(); } --- a/arch/arm/mach-omap2/timer.c +++ b/arch/arm/mach-omap2/timer.c @@ -369,9 +369,9 @@ static bool use_gptimer_clksrc __initdat /* * clocksource */ -static cycle_t clocksource_read_cycles(struct clocksource *cs) +static u64 clocksource_read_cycles(struct clocksource *cs) { - return (cycle_t)__omap_dm_timer_read_counter(&clksrc, + return (u64)__omap_dm_timer_read_counter(&clksrc, OMAP_TIMER_NONPOSTED); } --- a/arch/arm/plat-iop/time.c +++ b/arch/arm/plat-iop/time.c @@ -38,7 +38,7 @@ /* * IOP clocksource (free-running timer 1). 
*/ -static cycle_t notrace iop_clocksource_read(struct clocksource *unused) +static u64 notrace iop_clocksource_read(struct clocksource *unused) { return 0xffffffffu - read_tcr1(); } --- a/arch/avr32/kernel/time.c +++ b/arch/avr32/kernel/time.c @@ -20,9 +20,9 @@ static bool disable_cpu_idle_poll; -static cycle_t read_cycle_count(struct clocksource *cs) +static u64 read_cycle_count(struct clocksource *cs) { - return (cycle_t)sysreg_read(COUNT); + return (u64)sysreg_read(COUNT); } /* --- a/arch/blackfin/kernel/time-ts.c +++ b/arch/blackfin/kernel/time-ts.c @@ -26,7 +26,7 @@ #if defined(CONFIG_CYCLES_CLOCKSOURCE) -static notrace cycle_t bfin_read_cycles(struct clocksource *cs) +static notrace u64 bfin_read_cycles(struct clocksource *cs) { #ifdef CONFIG_CPU_FREQ return __bfin_cycles_off + (get_cycles() << __bfin_cycles_mod); @@ -80,7 +80,7 @@ void __init setup_gptimer0(void) enable_gptimers(TIMER0bit); } -static cycle_t bfin_read_gptimer0(struct clocksource *cs) +static u64 bfin_read_gptimer0(struct clocksource *cs) { return bfin_read_TIMER0_COUNTER(); } --- a/arch/c6x/kernel/time.c +++ b/arch/c6x/kernel/time.c @@ -26,7 +26,7 @@ static u32 sched_clock_multiplier; #define SCHED_CLOCK_SHIFT 16 -static cycle_t tsc_read(struct clocksource *cs) +static u64 tsc_read(struct clocksource *cs) { return get_cycles(); } --- a/arch/hexagon/kernel/time.c +++ b/arch/hexagon/kernel/time.c @@ -72,9 +72,9 @@ struct adsp_hw_timer_struct { /* Look for "TCX0" for related constants. 
*/ static __iomem struct adsp_hw_timer_struct *rtos_timer; -static cycle_t timer_get_cycles(struct clocksource *cs) +static u64 timer_get_cycles(struct clocksource *cs) { - return (cycle_t) __vmgettime(); + return (u64) __vmgettime(); } static struct clocksource hexagon_clocksource = { --- a/arch/ia64/kernel/cyclone.c +++ b/arch/ia64/kernel/cyclone.c @@ -21,9 +21,9 @@ void __init cyclone_setup(void) static void __iomem *cyclone_mc; -static cycle_t read_cyclone(struct clocksource *cs) +static u64 read_cyclone(struct clocksource *cs) { - return (cycle_t)readq((void __iomem *)cyclone_mc); + return (u64)readq((void __iomem *)cyclone_mc); } static struct clocksource clocksource_cyclone = { --- a/arch/ia64/kernel/fsyscall_gtod_data.h +++ b/arch/ia64/kernel/fsyscall_gtod_data.h @@ -9,15 +9,15 @@ struct fsyscall_gtod_data_t { seqcount_t seq; struct timespec wall_time; struct timespec monotonic_time; - cycle_t clk_mask; + u64 clk_mask; u32 clk_mult; u32 clk_shift; void *clk_fsys_mmio; - cycle_t clk_cycle_last; + u64 clk_cycle_last; } ____cacheline_aligned; struct itc_jitter_data_t { int itc_jitter; - cycle_t itc_lastcycle; + u64 itc_lastcycle; } ____cacheline_aligned; --- a/arch/ia64/kernel/time.c +++ b/arch/ia64/kernel/time.c @@ -31,7 +31,7 @@ #include "fsyscall_gtod_data.h" -static cycle_t itc_get_cycles(struct clocksource *cs); +static u64 itc_get_cycles(struct clocksource *cs); struct fsyscall_gtod_data_t fsyscall_gtod_data; @@ -323,7 +323,7 @@ void ia64_init_itm(void) } } -static cycle_t itc_get_cycles(struct clocksource *cs) +static u64 itc_get_cycles(struct clocksource *cs) { unsigned long lcycle, now, ret; @@ -397,7 +397,7 @@ void update_vsyscall_tz(void) } void update_vsyscall_old(struct timespec *wall, struct timespec *wtm, - struct clocksource *c, u32 mult, cycle_t cycle_last) + struct clocksource *c, u32 mult, u64 cycle_last) { write_seqcount_begin(&fsyscall_gtod_data.seq); --- a/arch/ia64/sn/kernel/sn2/timer.c +++ b/arch/ia64/sn/kernel/sn2/timer.c @@ -22,9 
+22,9 @@ extern unsigned long sn_rtc_cycles_per_second; -static cycle_t read_sn2(struct clocksource *cs) +static u64 read_sn2(struct clocksource *cs) { - return (cycle_t)readq(RTC_COUNTER_ADDR); + return (u64)readq(RTC_COUNTER_ADDR); } static struct clocksource clocksource_sn2 = { --- a/arch/m68k/68000/timers.c +++ b/arch/m68k/68000/timers.c @@ -76,7 +76,7 @@ static struct irqaction m68328_timer_irq /***************************************************************************/ -static cycle_t m68328_read_clk(struct clocksource *cs) +static u64 m68328_read_clk(struct clocksource *cs) { unsigned long flags; u32 cycles; --- a/arch/m68k/coldfire/dma_timer.c +++ b/arch/m68k/coldfire/dma_timer.c @@ -34,7 +34,7 @@ #define DMA_DTMR_CLK_DIV_16 (2 << 1) #define DMA_DTMR_ENABLE (1 << 0) -static cycle_t cf_dt_get_cycles(struct clocksource *cs) +static u64 cf_dt_get_cycles(struct clocksource *cs) { return __raw_readl(DTCN0); } --- a/arch/m68k/coldfire/pit.c +++ b/arch/m68k/coldfire/pit.c @@ -118,7 +118,7 @@ static struct irqaction pit_irq = { /***************************************************************************/ -static cycle_t pit_read_clk(struct clocksource *cs) +static u64 pit_read_clk(struct clocksource *cs) { unsigned long flags; u32 cycles; --- a/arch/m68k/coldfire/sltimers.c +++ b/arch/m68k/coldfire/sltimers.c @@ -97,7 +97,7 @@ static struct irqaction mcfslt_timer_irq .handler = mcfslt_tick, }; -static cycle_t mcfslt_read_clk(struct clocksource *cs) +static u64 mcfslt_read_clk(struct clocksource *cs) { unsigned long flags; u32 cycles, scnt; --- a/arch/m68k/coldfire/timers.c +++ b/arch/m68k/coldfire/timers.c @@ -89,7 +89,7 @@ static struct irqaction mcftmr_timer_irq /***************************************************************************/ -static cycle_t mcftmr_read_clk(struct clocksource *cs) +static u64 mcftmr_read_clk(struct clocksource *cs) { unsigned long flags; u32 cycles; --- a/arch/microblaze/kernel/timer.c +++ b/arch/microblaze/kernel/timer.c @@ -190,17 
+190,17 @@ static u64 xilinx_clock_read(void) return read_fn(timer_baseaddr + TCR1); } -static cycle_t xilinx_read(struct clocksource *cs) +static u64 xilinx_read(struct clocksource *cs) { /* reading actual value of timer 1 */ - return (cycle_t)xilinx_clock_read(); + return (u64)xilinx_clock_read(); } static struct timecounter xilinx_tc = { .cc = NULL, }; -static cycle_t xilinx_cc_read(const struct cyclecounter *cc) +static u64 xilinx_cc_read(const struct cyclecounter *cc) { return xilinx_read(NULL); } --- a/arch/mips/alchemy/common/time.c +++ b/arch/mips/alchemy/common/time.c @@ -44,7 +44,7 @@ /* 32kHz clock enabled and detected */ #define CNTR_OK (SYS_CNTRL_E0 | SYS_CNTRL_32S) -static cycle_t au1x_counter1_read(struct clocksource *cs) +static u64 au1x_counter1_read(struct clocksource *cs) { return alchemy_rdsys(AU1000_SYS_RTCREAD); } --- a/arch/mips/cavium-octeon/csrc-octeon.c +++ b/arch/mips/cavium-octeon/csrc-octeon.c @@ -98,7 +98,7 @@ void octeon_init_cvmcount(void) local_irq_restore(flags); } -static cycle_t octeon_cvmcount_read(struct clocksource *cs) +static u64 octeon_cvmcount_read(struct clocksource *cs) { return read_c0_cvmcount(); } --- a/arch/mips/jz4740/time.c +++ b/arch/mips/jz4740/time.c @@ -34,7 +34,7 @@ static uint16_t jz4740_jiffies_per_tick; -static cycle_t jz4740_clocksource_read(struct clocksource *cs) +static u64 jz4740_clocksource_read(struct clocksource *cs) { return jz4740_timer_get_count(TIMER_CLOCKSOURCE); } --- a/arch/mips/kernel/cevt-txx9.c +++ b/arch/mips/kernel/cevt-txx9.c @@ -27,7 +27,7 @@ struct txx9_clocksource { struct txx9_tmr_reg __iomem *tmrptr; }; -static cycle_t txx9_cs_read(struct clocksource *cs) +static u64 txx9_cs_read(struct clocksource *cs) { struct txx9_clocksource *txx9_cs = container_of(cs, struct txx9_clocksource, cs); --- a/arch/mips/kernel/csrc-bcm1480.c +++ b/arch/mips/kernel/csrc-bcm1480.c @@ -25,9 +25,9 @@ #include <asm/sibyte/sb1250.h> -static cycle_t bcm1480_hpt_read(struct clocksource *cs) +static u64 
bcm1480_hpt_read(struct clocksource *cs) { - return (cycle_t) __raw_readq(IOADDR(A_SCD_ZBBUS_CYCLE_COUNT)); + return (u64) __raw_readq(IOADDR(A_SCD_ZBBUS_CYCLE_COUNT)); } struct clocksource bcm1480_clocksource = { --- a/arch/mips/kernel/csrc-ioasic.c +++ b/arch/mips/kernel/csrc-ioasic.c @@ -22,7 +22,7 @@ #include <asm/dec/ioasic.h> #include <asm/dec/ioasic_addrs.h> -static cycle_t dec_ioasic_hpt_read(struct clocksource *cs) +static u64 dec_ioasic_hpt_read(struct clocksource *cs) { return ioasic_read(IO_REG_FCTR); } --- a/arch/mips/kernel/csrc-r4k.c +++ b/arch/mips/kernel/csrc-r4k.c @@ -11,7 +11,7 @@ #include <asm/time.h> -static cycle_t c0_hpt_read(struct clocksource *cs) +static u64 c0_hpt_read(struct clocksource *cs) { return read_c0_count(); } --- a/arch/mips/kernel/csrc-sb1250.c +++ b/arch/mips/kernel/csrc-sb1250.c @@ -30,7 +30,7 @@ * The HPT is free running from SB1250_HPT_VALUE down to 0 then starts over * again. */ -static inline cycle_t sb1250_hpt_get_cycles(void) +static inline u64 sb1250_hpt_get_cycles(void) { unsigned int count; void __iomem *addr; @@ -41,7 +41,7 @@ static inline cycle_t sb1250_hpt_get_cyc return SB1250_HPT_VALUE - count; } -static cycle_t sb1250_hpt_read(struct clocksource *cs) +static u64 sb1250_hpt_read(struct clocksource *cs) { return sb1250_hpt_get_cycles(); } --- a/arch/mips/loongson32/common/time.c +++ b/arch/mips/loongson32/common/time.c @@ -63,7 +63,7 @@ void __init ls1x_pwmtimer_init(void) ls1x_pwmtimer_restart(); } -static cycle_t ls1x_clocksource_read(struct clocksource *cs) +static u64 ls1x_clocksource_read(struct clocksource *cs) { unsigned long flags; int count; @@ -107,7 +107,7 @@ static cycle_t ls1x_clocksource_read(str raw_spin_unlock_irqrestore(&ls1x_timer_lock, flags); - return (cycle_t) (jifs * ls1x_jiffies_per_tick) + count; + return (u64) (jifs * ls1x_jiffies_per_tick) + count; } static struct clocksource ls1x_clocksource = { --- a/arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c +++ 
b/arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c @@ -144,7 +144,7 @@ void __init setup_mfgpt0_timer(void) * to just read by itself. So use jiffies to emulate a free * running counter: */ -static cycle_t mfgpt_read(struct clocksource *cs) +static u64 mfgpt_read(struct clocksource *cs) { unsigned long flags; int count; @@ -188,7 +188,7 @@ static cycle_t mfgpt_read(struct clockso raw_spin_unlock_irqrestore(&mfgpt_lock, flags); - return (cycle_t) (jifs * COMPARE) + count; + return (u64) (jifs * COMPARE) + count; } static struct clocksource clocksource_mfgpt = { --- a/arch/mips/loongson64/loongson-3/hpet.c +++ b/arch/mips/loongson64/loongson-3/hpet.c @@ -248,9 +248,9 @@ void __init setup_hpet_timer(void) pr_info("hpet clock event device register\n"); } -static cycle_t hpet_read_counter(struct clocksource *cs) +static u64 hpet_read_counter(struct clocksource *cs) { - return (cycle_t)hpet_read(HPET_COUNTER); + return (u64)hpet_read(HPET_COUNTER); } static void hpet_suspend(struct clocksource *cs) --- a/arch/mips/mti-malta/malta-time.c +++ b/arch/mips/mti-malta/malta-time.c @@ -75,7 +75,7 @@ static void __init estimate_frequencies( unsigned int count, start; unsigned char secs1, secs2, ctrl; int secs; - cycle_t giccount = 0, gicstart = 0; + u64 giccount = 0, gicstart = 0; #if defined(CONFIG_KVM_GUEST) && CONFIG_KVM_GUEST_TIMER_FREQ mips_hpt_frequency = CONFIG_KVM_GUEST_TIMER_FREQ * 1000000; --- a/arch/mips/netlogic/common/time.c +++ b/arch/mips/netlogic/common/time.c @@ -59,14 +59,14 @@ unsigned int get_c0_compare_int(void) return IRQ_TIMER; } -static cycle_t nlm_get_pic_timer(struct clocksource *cs) +static u64 nlm_get_pic_timer(struct clocksource *cs) { uint64_t picbase = nlm_get_node(0)->picbase; return ~nlm_pic_read_timer(picbase, PIC_CLOCK_TIMER); } -static cycle_t nlm_get_pic_timer32(struct clocksource *cs) +static u64 nlm_get_pic_timer32(struct clocksource *cs) { uint64_t picbase = nlm_get_node(0)->picbase; --- a/arch/mips/sgi-ip27/ip27-timer.c +++ 
b/arch/mips/sgi-ip27/ip27-timer.c @@ -140,7 +140,7 @@ static void __init hub_rt_clock_event_gl setup_irq(irq, &hub_rt_irqaction); } -static cycle_t hub_rt_read(struct clocksource *cs) +static u64 hub_rt_read(struct clocksource *cs) { return REMOTE_HUB_L(cputonasid(0), PI_RT_COUNT); } --- a/arch/mn10300/kernel/csrc-mn10300.c +++ b/arch/mn10300/kernel/csrc-mn10300.c @@ -13,7 +13,7 @@ #include <asm/timex.h> #include "internal.h" -static cycle_t mn10300_read(struct clocksource *cs) +static u64 mn10300_read(struct clocksource *cs) { return read_timestamp_counter(); } --- a/arch/nios2/kernel/time.c +++ b/arch/nios2/kernel/time.c @@ -81,7 +81,7 @@ static inline unsigned long read_timersn return count; } -static cycle_t nios2_timer_read(struct clocksource *cs) +static u64 nios2_timer_read(struct clocksource *cs) { struct nios2_clocksource *nios2_cs = to_nios2_clksource(cs); unsigned long flags; --- a/arch/openrisc/kernel/time.c +++ b/arch/openrisc/kernel/time.c @@ -117,9 +117,9 @@ static __init void openrisc_clockevent_i * is 32 bits wide and runs at the CPU clock frequency. 
*/ -static cycle_t openrisc_timer_read(struct clocksource *cs) +static u64 openrisc_timer_read(struct clocksource *cs) { - return (cycle_t) mfspr(SPR_TTCR); + return (u64) mfspr(SPR_TTCR); } static struct clocksource openrisc_timer = { --- a/arch/parisc/kernel/time.c +++ b/arch/parisc/kernel/time.c @@ -191,7 +191,7 @@ EXPORT_SYMBOL(profile_pc); /* clock source code */ -static cycle_t notrace read_cr16(struct clocksource *cs) +static u64 notrace read_cr16(struct clocksource *cs) { return get_cycles(); } --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -80,7 +80,7 @@ #include <linux/clockchips.h> #include <linux/timekeeper_internal.h> -static cycle_t rtc_read(struct clocksource *); +static u64 rtc_read(struct clocksource *); static struct clocksource clocksource_rtc = { .name = "rtc", .rating = 400, @@ -89,7 +89,7 @@ static struct clocksource clocksource_rt .read = rtc_read, }; -static cycle_t timebase_read(struct clocksource *); +static u64 timebase_read(struct clocksource *); static struct clocksource clocksource_timebase = { .name = "timebase", .rating = 400, @@ -802,18 +802,18 @@ void read_persistent_clock(struct timesp } /* clocksource code */ -static cycle_t rtc_read(struct clocksource *cs) +static u64 rtc_read(struct clocksource *cs) { - return (cycle_t)get_rtc(); + return (u64)get_rtc(); } -static cycle_t timebase_read(struct clocksource *cs) +static u64 timebase_read(struct clocksource *cs) { - return (cycle_t)get_tb(); + return (u64)get_tb(); } void update_vsyscall_old(struct timespec *wall_time, struct timespec *wtm, - struct clocksource *clock, u32 mult, cycle_t cycle_last) + struct clocksource *clock, u32 mult, u64 cycle_last) { u64 new_tb_to_xs, new_stamp_xsec; u32 frac_sec; --- a/arch/s390/kernel/time.c +++ b/arch/s390/kernel/time.c @@ -213,7 +213,7 @@ void read_boot_clock64(struct timespec64 tod_to_timeval(clock - TOD_UNIX_EPOCH, ts); } -static cycle_t read_tod_clock(struct clocksource *cs) +static u64 read_tod_clock(struct 
clocksource *cs) { return get_tod_clock(); } --- a/arch/sparc/kernel/time_32.c +++ b/arch/sparc/kernel/time_32.c @@ -148,7 +148,7 @@ static unsigned int sbus_cycles_offset(v return offset; } -static cycle_t timer_cs_read(struct clocksource *cs) +static u64 timer_cs_read(struct clocksource *cs) { unsigned int seq, offset; u64 cycles; --- a/arch/sparc/kernel/time_64.c +++ b/arch/sparc/kernel/time_64.c @@ -770,7 +770,7 @@ void udelay(unsigned long usecs) } EXPORT_SYMBOL(udelay); -static cycle_t clocksource_tick_read(struct clocksource *cs) +static u64 clocksource_tick_read(struct clocksource *cs) { return tick_ops->get_tick(); } --- a/arch/um/kernel/time.c +++ b/arch/um/kernel/time.c @@ -83,7 +83,7 @@ static irqreturn_t um_timer(int irq, voi return IRQ_HANDLED; } -static cycle_t timer_read(struct clocksource *cs) +static u64 timer_read(struct clocksource *cs) { return os_nsecs() / TIMER_MULTIPLIER; } --- a/arch/unicore32/kernel/time.c +++ b/arch/unicore32/kernel/time.c @@ -62,7 +62,7 @@ static struct clock_event_device ckevt_p .set_state_oneshot = puv3_osmr0_shutdown, }; -static cycle_t puv3_read_oscr(struct clocksource *cs) +static u64 puv3_read_oscr(struct clocksource *cs) { return readl(OST_OSCR); } --- a/arch/x86/entry/vdso/vclock_gettime.c +++ b/arch/x86/entry/vdso/vclock_gettime.c @@ -92,10 +92,10 @@ static notrace const struct pvclock_vsys return (const struct pvclock_vsyscall_time_info *)&pvclock_page; } -static notrace cycle_t vread_pvclock(int *mode) +static notrace u64 vread_pvclock(int *mode) { const struct pvclock_vcpu_time_info *pvti = &get_pvti0()->pvti; - cycle_t ret; + u64 ret; u64 last; u32 version; @@ -142,9 +142,9 @@ static notrace cycle_t vread_pvclock(int } #endif -notrace static cycle_t vread_tsc(void) +notrace static u64 vread_tsc(void) { - cycle_t ret = (cycle_t)rdtsc_ordered(); + u64 ret = (u64)rdtsc_ordered(); u64 last = gtod->cycle_last; if (likely(ret >= last)) --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ 
-758,7 +758,7 @@ struct kvm_arch { spinlock_t pvclock_gtod_sync_lock; bool use_master_clock; u64 master_kernel_ns; - cycle_t master_cycle_now; + u64 master_cycle_now; struct delayed_work kvmclock_update_work; struct delayed_work kvmclock_sync_work; --- a/arch/x86/include/asm/pvclock.h +++ b/arch/x86/include/asm/pvclock.h @@ -14,7 +14,7 @@ static inline struct pvclock_vsyscall_ti #endif /* some helper functions for xen and kvm pv clock sources */ -cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src); +u64 pvclock_clocksource_read(struct pvclock_vcpu_time_info *src); u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src); void pvclock_set_flags(u8 flags); unsigned long pvclock_tsc_khz(struct pvclock_vcpu_time_info *src); @@ -87,11 +87,11 @@ static inline u64 pvclock_scale_delta(u6 } static __always_inline -cycle_t __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src, +u64 __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src, u64 tsc) { u64 delta = tsc - src->tsc_timestamp; - cycle_t offset = pvclock_scale_delta(delta, src->tsc_to_system_mul, + u64 offset = pvclock_scale_delta(delta, src->tsc_to_system_mul, src->tsc_shift); return src->system_time + offset; } --- a/arch/x86/include/asm/tsc.h +++ b/arch/x86/include/asm/tsc.h @@ -29,7 +29,7 @@ static inline cycles_t get_cycles(void) return rdtsc(); } -extern struct system_counterval_t convert_art_to_tsc(cycle_t art); +extern struct system_counterval_t convert_art_to_tsc(u64 art); extern void tsc_init(void); extern void mark_tsc_unstable(char *reason); --- a/arch/x86/include/asm/vgtod.h +++ b/arch/x86/include/asm/vgtod.h @@ -17,8 +17,8 @@ struct vsyscall_gtod_data { unsigned seq; int vclock_mode; - cycle_t cycle_last; - cycle_t mask; + u64 cycle_last; + u64 mask; u32 mult; u32 shift; --- a/arch/x86/kernel/apb_timer.c +++ b/arch/x86/kernel/apb_timer.c @@ -247,7 +247,7 @@ void apbt_setup_secondary_clock(void) {} static int apbt_clocksource_register(void) { u64 start, now; - cycle_t 
t1; + u64 t1; /* Start the counter, use timer 2 as source, timer 0/1 for event */ dw_apb_clocksource_start(clocksource_apbt); @@ -355,7 +355,7 @@ unsigned long apbt_quick_calibrate(void) { int i, scale; u64 old, new; - cycle_t t1, t2; + u64 t1, t2; unsigned long khz = 0; u32 loop, shift; --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -133,9 +133,9 @@ static uint32_t __init ms_hyperv_platfo return 0; } -static cycle_t read_hv_clock(struct clocksource *arg) +static u64 read_hv_clock(struct clocksource *arg) { - cycle_t current_tick; + u64 current_tick; /* * Read the partition counter to get the current tick count. This count * is set to 0 when the partition is created and is incremented in --- a/arch/x86/kernel/hpet.c +++ b/arch/x86/kernel/hpet.c @@ -791,7 +791,7 @@ static union hpet_lock hpet __cacheline_ { .lock = __ARCH_SPIN_LOCK_UNLOCKED, }, }; -static cycle_t read_hpet(struct clocksource *cs) +static u64 read_hpet(struct clocksource *cs) { unsigned long flags; union hpet_lock old, new; @@ -802,7 +802,7 @@ static cycle_t read_hpet(struct clocksou * Read HPET directly if in NMI. */ if (in_nmi()) - return (cycle_t)hpet_readl(HPET_COUNTER); + return (u64)hpet_readl(HPET_COUNTER); /* * Read the current state of the lock and HPET value atomically. @@ -821,7 +821,7 @@ static cycle_t read_hpet(struct clocksou WRITE_ONCE(hpet.value, new.value); arch_spin_unlock(&hpet.lock); local_irq_restore(flags); - return (cycle_t)new.value; + return (u64)new.value; } local_irq_restore(flags); @@ -843,15 +843,15 @@ static cycle_t read_hpet(struct clocksou new.lockval = READ_ONCE(hpet.lockval); } while ((new.value == old.value) && arch_spin_is_locked(&new.lock)); - return (cycle_t)new.value; + return (u64)new.value; } #else /* * For UP or 32-bit. 
*/ -static cycle_t read_hpet(struct clocksource *cs) +static u64 read_hpet(struct clocksource *cs) { - return (cycle_t)hpet_readl(HPET_COUNTER); + return (u64)hpet_readl(HPET_COUNTER); } #endif @@ -867,7 +867,7 @@ static struct clocksource clocksource_hp static int hpet_clocksource_register(void) { u64 start, now; - cycle_t t1; + u64 t1; /* Start the counter */ hpet_restart_counter(); --- a/arch/x86/kernel/kvmclock.c +++ b/arch/x86/kernel/kvmclock.c @@ -32,7 +32,7 @@ static int kvmclock __ro_after_init = 1; static int msr_kvm_system_time = MSR_KVM_SYSTEM_TIME; static int msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK; -static cycle_t kvm_sched_clock_offset; +static u64 kvm_sched_clock_offset; static int parse_no_kvmclock(char *arg) { @@ -79,10 +79,10 @@ static int kvm_set_wallclock(const struc return -1; } -static cycle_t kvm_clock_read(void) +static u64 kvm_clock_read(void) { struct pvclock_vcpu_time_info *src; - cycle_t ret; + u64 ret; int cpu; preempt_disable_notrace(); @@ -93,12 +93,12 @@ static cycle_t kvm_clock_read(void) return ret; } -static cycle_t kvm_clock_get_cycles(struct clocksource *cs) +static u64 kvm_clock_get_cycles(struct clocksource *cs) { return kvm_clock_read(); } -static cycle_t kvm_sched_clock_read(void) +static u64 kvm_sched_clock_read(void) { return kvm_clock_read() - kvm_sched_clock_offset; } --- a/arch/x86/kernel/pvclock.c +++ b/arch/x86/kernel/pvclock.c @@ -71,10 +71,10 @@ u8 pvclock_read_flags(struct pvclock_vcp return flags & valid_flags; } -cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src) +u64 pvclock_clocksource_read(struct pvclock_vcpu_time_info *src) { unsigned version; - cycle_t ret; + u64 ret; u64 last; u8 flags; --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -1080,9 +1080,9 @@ static struct clocksource clocksource_ts * checking the result of read_tsc() - cycle_last for being negative. * That works because CLOCKSOURCE_MASK(64) does not mask out any bit. 
*/ -static cycle_t read_tsc(struct clocksource *cs) +static u64 read_tsc(struct clocksource *cs) { - return (cycle_t)rdtsc_ordered(); + return (u64)rdtsc_ordered(); } /* @@ -1170,7 +1170,7 @@ int unsynchronized_tsc(void) /* * Convert ART to TSC given numerator/denominator found in detect_art() */ -struct system_counterval_t convert_art_to_tsc(cycle_t art) +struct system_counterval_t convert_art_to_tsc(u64 art) { u64 tmp, res, rem; --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1128,8 +1128,8 @@ struct pvclock_gtod_data { struct { /* extract of a clocksource struct */ int vclock_mode; - cycle_t cycle_last; - cycle_t mask; + u64 cycle_last; + u64 mask; u32 mult; u32 shift; } clock; @@ -1569,9 +1569,9 @@ static inline void adjust_tsc_offset_hos #ifdef CONFIG_X86_64 -static cycle_t read_tsc(void) +static u64 read_tsc(void) { - cycle_t ret = (cycle_t)rdtsc_ordered(); + u64 ret = (u64)rdtsc_ordered(); u64 last = pvclock_gtod_data.clock.cycle_last; if (likely(ret >= last)) @@ -1589,7 +1589,7 @@ static cycle_t read_tsc(void) return last; } -static inline u64 vgettsc(cycle_t *cycle_now) +static inline u64 vgettsc(u64 *cycle_now) { long v; struct pvclock_gtod_data *gtod = &pvclock_gtod_data; @@ -1600,7 +1600,7 @@ static inline u64 vgettsc(cycle_t *cycle return v * gtod->clock.mult; } -static int do_monotonic_boot(s64 *t, cycle_t *cycle_now) +static int do_monotonic_boot(s64 *t, u64 *cycle_now) { struct pvclock_gtod_data *gtod = &pvclock_gtod_data; unsigned long seq; @@ -1621,7 +1621,7 @@ static int do_monotonic_boot(s64 *t, cyc } /* returns true if host is using tsc clocksource */ -static bool kvm_get_time_and_clockread(s64 *kernel_ns, cycle_t *cycle_now) +static bool kvm_get_time_and_clockread(s64 *kernel_ns, u64 *cycle_now) { /* checked again under seqlock below */ if (pvclock_gtod_data.clock.vclock_mode != VCLOCK_TSC) --- a/arch/x86/lguest/boot.c +++ b/arch/x86/lguest/boot.c @@ -930,7 +930,7 @@ static unsigned long lguest_tsc_khz(void * If we can't use the TSC, the 
kernel falls back to our lower-priority * "lguest_clock", where we read the time value given to us by the Host. */ -static cycle_t lguest_clock_read(struct clocksource *cs) +static u64 lguest_clock_read(struct clocksource *cs) { unsigned long sec, nsec; --- a/arch/x86/platform/uv/uv_time.c +++ b/arch/x86/platform/uv/uv_time.c @@ -30,7 +30,7 @@ #define RTC_NAME "sgi_rtc" -static cycle_t uv_read_rtc(struct clocksource *cs); +static u64 uv_read_rtc(struct clocksource *cs); static int uv_rtc_next_event(unsigned long, struct clock_event_device *); static int uv_rtc_shutdown(struct clock_event_device *evt); @@ -38,7 +38,7 @@ static struct clocksource clocksource_uv .name = RTC_NAME, .rating = 299, .read = uv_read_rtc, - .mask = (cycle_t)UVH_RTC_REAL_TIME_CLOCK_MASK, + .mask = (u64)UVH_RTC_REAL_TIME_CLOCK_MASK, .flags = CLOCK_SOURCE_IS_CONTINUOUS, }; @@ -296,7 +296,7 @@ static int uv_rtc_unset_timer(int cpu, i * cachelines of it's own page. This allows faster simultaneous reads * from a given socket. 
*/ -static cycle_t uv_read_rtc(struct clocksource *cs) +static u64 uv_read_rtc(struct clocksource *cs) { unsigned long offset; @@ -305,7 +305,7 @@ static cycle_t uv_read_rtc(struct clocks else offset = (uv_blade_processor_id() * L1_CACHE_BYTES) % PAGE_SIZE; - return (cycle_t)uv_read_local_mmr(UVH_RTC | offset); + return (u64)uv_read_local_mmr(UVH_RTC | offset); } /* --- a/arch/x86/xen/time.c +++ b/arch/x86/xen/time.c @@ -39,10 +39,10 @@ static unsigned long xen_tsc_khz(void) return pvclock_tsc_khz(info); } -cycle_t xen_clocksource_read(void) +u64 xen_clocksource_read(void) { struct pvclock_vcpu_time_info *src; - cycle_t ret; + u64 ret; preempt_disable_notrace(); src = &__this_cpu_read(xen_vcpu)->time; @@ -51,7 +51,7 @@ cycle_t xen_clocksource_read(void) return ret; } -static cycle_t xen_clocksource_get_cycles(struct clocksource *cs) +static u64 xen_clocksource_get_cycles(struct clocksource *cs) { return xen_clocksource_read(); } --- a/arch/x86/xen/xen-ops.h +++ b/arch/x86/xen/xen-ops.h @@ -67,7 +67,7 @@ void xen_init_irq_ops(void); void xen_setup_timer(int cpu); void xen_setup_runstate_info(int cpu); void xen_teardown_timer(int cpu); -cycle_t xen_clocksource_read(void); +u64 xen_clocksource_read(void); void xen_setup_cpu_clockevents(void); void __init xen_init_time_ops(void); void __init xen_hvm_init_time_ops(void); --- a/arch/xtensa/kernel/time.c +++ b/arch/xtensa/kernel/time.c @@ -34,9 +34,9 @@ unsigned long ccount_freq; /* ccount Hz */ EXPORT_SYMBOL(ccount_freq); -static cycle_t ccount_read(struct clocksource *cs) +static u64 ccount_read(struct clocksource *cs) { - return (cycle_t)get_ccount(); + return (u64)get_ccount(); } static u64 notrace ccount_sched_clock_read(void) --- a/drivers/char/hpet.c +++ b/drivers/char/hpet.c @@ -69,9 +69,9 @@ static u32 hpet_nhpet, hpet_max_freq = H #ifdef CONFIG_IA64 static void __iomem *hpet_mctr; -static cycle_t read_hpet(struct clocksource *cs) +static u64 read_hpet(struct clocksource *cs) { - return 
(cycle_t)read_counter((void __iomem *)hpet_mctr); + return (u64)read_counter((void __iomem *)hpet_mctr); } static struct clocksource clocksource_hpet = { --- a/drivers/clocksource/acpi_pm.c +++ b/drivers/clocksource/acpi_pm.c @@ -58,16 +58,16 @@ u32 acpi_pm_read_verified(void) return v2; } -static cycle_t acpi_pm_read(struct clocksource *cs) +static u64 acpi_pm_read(struct clocksource *cs) { - return (cycle_t)read_pmtmr(); + return (u64)read_pmtmr(); } static struct clocksource clocksource_acpi_pm = { .name = "acpi_pm", .rating = 200, .read = acpi_pm_read, - .mask = (cycle_t)ACPI_PM_MASK, + .mask = (u64)ACPI_PM_MASK, .flags = CLOCK_SOURCE_IS_CONTINUOUS, }; @@ -81,9 +81,9 @@ static int __init acpi_pm_good_setup(cha } __setup("acpi_pm_good", acpi_pm_good_setup); -static cycle_t acpi_pm_read_slow(struct clocksource *cs) +static u64 acpi_pm_read_slow(struct clocksource *cs) { - return (cycle_t)acpi_pm_read_verified(); + return (u64)acpi_pm_read_verified(); } static inline void acpi_pm_need_workaround(void) @@ -145,7 +145,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SE */ static int verify_pmtmr_rate(void) { - cycle_t value1, value2; + u64 value1, value2; unsigned long count, delta; mach_prepare_counter(); @@ -175,7 +175,7 @@ static int verify_pmtmr_rate(void) static int __init init_acpi_pm_clocksource(void) { - cycle_t value1, value2; + u64 value1, value2; unsigned int i, j = 0; if (!pmtmr_ioport) --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -561,12 +561,12 @@ static u64 arch_counter_get_cntvct_mem(v */ u64 (*arch_timer_read_counter)(void) = arch_counter_get_cntvct; -static cycle_t arch_counter_read(struct clocksource *cs) +static u64 arch_counter_read(struct clocksource *cs) { return arch_timer_read_counter(); } -static cycle_t arch_counter_read_cc(const struct cyclecounter *cc) +static u64 arch_counter_read_cc(const struct cyclecounter *cc) { return arch_timer_read_counter(); } --- a/drivers/clocksource/arm_global_timer.c +++ 
b/drivers/clocksource/arm_global_timer.c @@ -195,7 +195,7 @@ static int gt_dying_cpu(unsigned int cpu return 0; } -static cycle_t gt_clocksource_read(struct clocksource *cs) +static u64 gt_clocksource_read(struct clocksource *cs) { return gt_counter_read(); } --- a/drivers/clocksource/cadence_ttc_timer.c +++ b/drivers/clocksource/cadence_ttc_timer.c @@ -158,11 +158,11 @@ static irqreturn_t ttc_clock_event_inter * * returns: Current timer counter register value **/ -static cycle_t __ttc_clocksource_read(struct clocksource *cs) +static u64 __ttc_clocksource_read(struct clocksource *cs) { struct ttc_timer *timer = &to_ttc_timer_clksrc(cs)->ttc; - return (cycle_t)readl_relaxed(timer->base_addr + + return (u64)readl_relaxed(timer->base_addr + TTC_COUNT_VAL_OFFSET); } --- a/drivers/clocksource/clksrc-dbx500-prcmu.c +++ b/drivers/clocksource/clksrc-dbx500-prcmu.c @@ -30,7 +30,7 @@ static void __iomem *clksrc_dbx500_timer_base; -static cycle_t notrace clksrc_dbx500_prcmu_read(struct clocksource *cs) +static u64 notrace clksrc_dbx500_prcmu_read(struct clocksource *cs) { void __iomem *base = clksrc_dbx500_timer_base; u32 count, count2; --- a/drivers/clocksource/dw_apb_timer.c +++ b/drivers/clocksource/dw_apb_timer.c @@ -348,7 +348,7 @@ void dw_apb_clocksource_start(struct dw_ dw_apb_clocksource_read(dw_cs); } -static cycle_t __apbt_read_clocksource(struct clocksource *cs) +static u64 __apbt_read_clocksource(struct clocksource *cs) { u32 current_count; struct dw_apb_clocksource *dw_cs = @@ -357,7 +357,7 @@ static cycle_t __apbt_read_clocksource(s current_count = apbt_readl_relaxed(&dw_cs->timer, APBTMR_N_CURRENT_VALUE); - return (cycle_t)~current_count; + return (u64)~current_count; } static void apbt_restart_clocksource(struct clocksource *cs) @@ -416,7 +416,7 @@ void dw_apb_clocksource_register(struct * * @dw_cs: The clocksource to read. 
*/ -cycle_t dw_apb_clocksource_read(struct dw_apb_clocksource *dw_cs) +u64 dw_apb_clocksource_read(struct dw_apb_clocksource *dw_cs) { - return (cycle_t)~apbt_readl(&dw_cs->timer, APBTMR_N_CURRENT_VALUE); + return (u64)~apbt_readl(&dw_cs->timer, APBTMR_N_CURRENT_VALUE); } --- a/drivers/clocksource/em_sti.c +++ b/drivers/clocksource/em_sti.c @@ -110,9 +110,9 @@ static void em_sti_disable(struct em_sti clk_disable_unprepare(p->clk); } -static cycle_t em_sti_count(struct em_sti_priv *p) +static u64 em_sti_count(struct em_sti_priv *p) { - cycle_t ticks; + u64 ticks; unsigned long flags; /* the STI hardware buffers the 48-bit count, but to @@ -121,14 +121,14 @@ static cycle_t em_sti_count(struct em_st * Always read STI_COUNT_H before STI_COUNT_L. */ raw_spin_lock_irqsave(&p->lock, flags); - ticks = (cycle_t)(em_sti_read(p, STI_COUNT_H) & 0xffff) << 32; + ticks = (u64)(em_sti_read(p, STI_COUNT_H) & 0xffff) << 32; ticks |= em_sti_read(p, STI_COUNT_L); raw_spin_unlock_irqrestore(&p->lock, flags); return ticks; } -static cycle_t em_sti_set_next(struct em_sti_priv *p, cycle_t next) +static u64 em_sti_set_next(struct em_sti_priv *p, u64 next) { unsigned long flags; @@ -198,7 +198,7 @@ static struct em_sti_priv *cs_to_em_sti( return container_of(cs, struct em_sti_priv, cs); } -static cycle_t em_sti_clocksource_read(struct clocksource *cs) +static u64 em_sti_clocksource_read(struct clocksource *cs) { return em_sti_count(cs_to_em_sti(cs)); } @@ -271,7 +271,7 @@ static int em_sti_clock_event_next(unsig struct clock_event_device *ced) { struct em_sti_priv *p = ced_to_em_sti(ced); - cycle_t next; + u64 next; int safe; next = em_sti_set_next(p, em_sti_count(p) + delta); --- a/drivers/clocksource/exynos_mct.c +++ b/drivers/clocksource/exynos_mct.c @@ -183,7 +183,7 @@ static u64 exynos4_read_count_64(void) hi2 = readl_relaxed(reg_base + EXYNOS4_MCT_G_CNT_U); } while (hi != hi2); - return ((cycle_t)hi << 32) | lo; + return ((u64)hi << 32) | lo; } /** @@ -199,7 +199,7 @@ static u32 
notrace exynos4_read_count_32 return readl_relaxed(reg_base + EXYNOS4_MCT_G_CNT_L); } -static cycle_t exynos4_frc_read(struct clocksource *cs) +static u64 exynos4_frc_read(struct clocksource *cs) { return exynos4_read_count_32(); } @@ -266,7 +266,7 @@ static void exynos4_mct_comp0_stop(void) static void exynos4_mct_comp0_start(bool periodic, unsigned long cycles) { unsigned int tcon; - cycle_t comp_cycle; + u64 comp_cycle; tcon = readl_relaxed(reg_base + EXYNOS4_MCT_G_TCON); --- a/drivers/clocksource/h8300_timer16.c +++ b/drivers/clocksource/h8300_timer16.c @@ -72,7 +72,7 @@ static inline struct timer16_priv *cs_to return container_of(cs, struct timer16_priv, cs); } -static cycle_t timer16_clocksource_read(struct clocksource *cs) +static u64 timer16_clocksource_read(struct clocksource *cs) { struct timer16_priv *p = cs_to_priv(cs); unsigned long raw, value; --- a/drivers/clocksource/h8300_tpu.c +++ b/drivers/clocksource/h8300_tpu.c @@ -64,7 +64,7 @@ static inline struct tpu_priv *cs_to_pri return container_of(cs, struct tpu_priv, cs); } -static cycle_t tpu_clocksource_read(struct clocksource *cs) +static u64 tpu_clocksource_read(struct clocksource *cs) { struct tpu_priv *p = cs_to_priv(cs); unsigned long flags; --- a/drivers/clocksource/i8253.c +++ b/drivers/clocksource/i8253.c @@ -25,7 +25,7 @@ EXPORT_SYMBOL(i8253_lock); * to just read by itself. 
So use jiffies to emulate a free * running counter: */ -static cycle_t i8253_read(struct clocksource *cs) +static u64 i8253_read(struct clocksource *cs) { static int old_count; static u32 old_jifs; @@ -83,7 +83,7 @@ static cycle_t i8253_read(struct clockso count = (PIT_LATCH - 1) - count; - return (cycle_t)(jifs * PIT_LATCH) + count; + return (u64)(jifs * PIT_LATCH) + count; } static struct clocksource i8253_cs = { --- a/drivers/clocksource/jcore-pit.c +++ b/drivers/clocksource/jcore-pit.c @@ -57,7 +57,7 @@ static notrace u64 jcore_sched_clock_rea return seclo * NSEC_PER_SEC + nsec; } -static cycle_t jcore_clocksource_read(struct clocksource *cs) +static u64 jcore_clocksource_read(struct clocksource *cs) { return jcore_sched_clock_read(); } --- a/drivers/clocksource/metag_generic.c +++ b/drivers/clocksource/metag_generic.c @@ -56,7 +56,7 @@ static int metag_timer_set_next_event(un return 0; } -static cycle_t metag_clocksource_read(struct clocksource *cs) +static u64 metag_clocksource_read(struct clocksource *cs) { return __core_reg_get(TXTIMER); } --- a/drivers/clocksource/mips-gic-timer.c +++ b/drivers/clocksource/mips-gic-timer.c @@ -125,7 +125,7 @@ static int gic_clockevent_init(void) return 0; } -static cycle_t gic_hpt_read(struct clocksource *cs) +static u64 gic_hpt_read(struct clocksource *cs) { return gic_read_count(); } --- a/drivers/clocksource/mmio.c +++ b/drivers/clocksource/mmio.c @@ -20,24 +20,24 @@ static inline struct clocksource_mmio *t return container_of(c, struct clocksource_mmio, clksrc); } -cycle_t clocksource_mmio_readl_up(struct clocksource *c) +u64 clocksource_mmio_readl_up(struct clocksource *c) { - return (cycle_t)readl_relaxed(to_mmio_clksrc(c)->reg); + return (u64)readl_relaxed(to_mmio_clksrc(c)->reg); } -cycle_t clocksource_mmio_readl_down(struct clocksource *c) +u64 clocksource_mmio_readl_down(struct clocksource *c) { - return ~(cycle_t)readl_relaxed(to_mmio_clksrc(c)->reg) & c->mask; + return 
~(u64)readl_relaxed(to_mmio_clksrc(c)->reg) & c->mask; } -cycle_t clocksource_mmio_readw_up(struct clocksource *c) +u64 clocksource_mmio_readw_up(struct clocksource *c) { - return (cycle_t)readw_relaxed(to_mmio_clksrc(c)->reg); + return (u64)readw_relaxed(to_mmio_clksrc(c)->reg); } -cycle_t clocksource_mmio_readw_down(struct clocksource *c) +u64 clocksource_mmio_readw_down(struct clocksource *c) { - return ~(cycle_t)readw_relaxed(to_mmio_clksrc(c)->reg) & c->mask; + return ~(u64)readw_relaxed(to_mmio_clksrc(c)->reg) & c->mask; } /** @@ -51,7 +51,7 @@ cycle_t clocksource_mmio_readw_down(stru */ int __init clocksource_mmio_init(void __iomem *base, const char *name, unsigned long hz, int rating, unsigned bits, - cycle_t (*read)(struct clocksource *)) + u64 (*read)(struct clocksource *)) { struct clocksource_mmio *cs; --- a/drivers/clocksource/mxs_timer.c +++ b/drivers/clocksource/mxs_timer.c @@ -97,7 +97,7 @@ static void timrot_irq_acknowledge(void) HW_TIMROT_TIMCTRLn(0) + STMP_OFFSET_REG_CLR); } -static cycle_t timrotv1_get_cycles(struct clocksource *cs) +static u64 timrotv1_get_cycles(struct clocksource *cs) { return ~((__raw_readl(mxs_timrot_base + HW_TIMROT_TIMCOUNTn(1)) & 0xffff0000) >> 16); --- a/drivers/clocksource/qcom-timer.c +++ b/drivers/clocksource/qcom-timer.c @@ -89,7 +89,7 @@ static struct clock_event_device __percp static void __iomem *source_base; -static notrace cycle_t msm_read_timer_count(struct clocksource *cs) +static notrace u64 msm_read_timer_count(struct clocksource *cs) { return readl_relaxed(source_base + TIMER_COUNT_VAL); } --- a/drivers/clocksource/samsung_pwm_timer.c +++ b/drivers/clocksource/samsung_pwm_timer.c @@ -307,7 +307,7 @@ static void samsung_clocksource_resume(s samsung_time_start(pwm.source_id, true); } -static cycle_t notrace samsung_clocksource_read(struct clocksource *c) +static u64 notrace samsung_clocksource_read(struct clocksource *c) { return ~readl_relaxed(pwm.source_reg); } --- a/drivers/clocksource/scx200_hrt.c +++ 
b/drivers/clocksource/scx200_hrt.c @@ -43,10 +43,10 @@ MODULE_PARM_DESC(ppm, "+-adjust to actua /* The base timer frequency, * 27 if selected */ #define HRT_FREQ 1000000 -static cycle_t read_hrt(struct clocksource *cs) +static u64 read_hrt(struct clocksource *cs) { /* Read the timer value */ - return (cycle_t) inl(scx200_cb_base + SCx200_TIMER_OFFSET); + return (u64) inl(scx200_cb_base + SCx200_TIMER_OFFSET); } static struct clocksource cs_hrt = { --- a/drivers/clocksource/sh_cmt.c +++ b/drivers/clocksource/sh_cmt.c @@ -612,7 +612,7 @@ static struct sh_cmt_channel *cs_to_sh_c return container_of(cs, struct sh_cmt_channel, cs); } -static cycle_t sh_cmt_clocksource_read(struct clocksource *cs) +static u64 sh_cmt_clocksource_read(struct clocksource *cs) { struct sh_cmt_channel *ch = cs_to_sh_cmt(cs); unsigned long flags, raw; --- a/drivers/clocksource/sh_tmu.c +++ b/drivers/clocksource/sh_tmu.c @@ -255,7 +255,7 @@ static struct sh_tmu_channel *cs_to_sh_t return container_of(cs, struct sh_tmu_channel, cs); } -static cycle_t sh_tmu_clocksource_read(struct clocksource *cs) +static u64 sh_tmu_clocksource_read(struct clocksource *cs) { struct sh_tmu_channel *ch = cs_to_sh_tmu(cs); --- a/drivers/clocksource/tcb_clksrc.c +++ b/drivers/clocksource/tcb_clksrc.c @@ -41,7 +41,7 @@ static void __iomem *tcaddr; -static cycle_t tc_get_cycles(struct clocksource *cs) +static u64 tc_get_cycles(struct clocksource *cs) { unsigned long flags; u32 lower, upper; @@ -56,7 +56,7 @@ static cycle_t tc_get_cycles(struct cloc return (upper << 16) | lower; } -static cycle_t tc_get_cycles32(struct clocksource *cs) +static u64 tc_get_cycles32(struct clocksource *cs) { return __raw_readl(tcaddr + ATMEL_TC_REG(0, CV)); } --- a/drivers/clocksource/time-pistachio.c +++ b/drivers/clocksource/time-pistachio.c @@ -67,7 +67,7 @@ static inline void gpt_writel(void __iom writel(value, base + 0x20 * gpt_id + offset); } -static cycle_t notrace +static u64 notrace pistachio_clocksource_read_cycles(struct 
clocksource *cs) { struct pistachio_clocksource *pcs = to_pistachio_clocksource(cs); @@ -84,7 +84,7 @@ pistachio_clocksource_read_cycles(struct counter = gpt_readl(pcs->base, TIMER_CURRENT_VALUE, 0); raw_spin_unlock_irqrestore(&pcs->lock, flags); - return (cycle_t)~counter; + return (u64)~counter; } static u64 notrace pistachio_read_sched_clock(void) --- a/drivers/clocksource/timer-atlas7.c +++ b/drivers/clocksource/timer-atlas7.c @@ -85,7 +85,7 @@ static irqreturn_t sirfsoc_timer_interru } /* read 64-bit timer counter */ -static cycle_t sirfsoc_timer_read(struct clocksource *cs) +static u64 sirfsoc_timer_read(struct clocksource *cs) { u64 cycles; --- a/drivers/clocksource/timer-atmel-pit.c +++ b/drivers/clocksource/timer-atmel-pit.c @@ -73,7 +73,7 @@ static inline void pit_write(void __iome * Clocksource: just a monotonic counter of MCK/16 cycles. * We don't care whether or not PIT irqs are enabled. */ -static cycle_t read_pit_clk(struct clocksource *cs) +static u64 read_pit_clk(struct clocksource *cs) { struct pit_data *data = clksrc_to_pit_data(cs); unsigned long flags; --- a/drivers/clocksource/timer-atmel-st.c +++ b/drivers/clocksource/timer-atmel-st.c @@ -92,7 +92,7 @@ static irqreturn_t at91rm9200_timer_inte return IRQ_NONE; } -static cycle_t read_clk32k(struct clocksource *cs) +static u64 read_clk32k(struct clocksource *cs) { return read_CRTR(); } --- a/drivers/clocksource/timer-nps.c +++ b/drivers/clocksource/timer-nps.c @@ -48,11 +48,11 @@ static void *nps_msu_reg_low_addr[NPS_CL static unsigned long nps_timer_rate; -static cycle_t nps_clksrc_read(struct clocksource *clksrc) +static u64 nps_clksrc_read(struct clocksource *clksrc) { int cluster = raw_smp_processor_id() >> NPS_CLUSTER_OFFSET; - return (cycle_t)ioread32be(nps_msu_reg_low_addr[cluster]); + return (u64)ioread32be(nps_msu_reg_low_addr[cluster]); } static int __init nps_setup_clocksource(struct device_node *node, --- a/drivers/clocksource/timer-prima2.c +++ b/drivers/clocksource/timer-prima2.c 
@@ -72,7 +72,7 @@ static irqreturn_t sirfsoc_timer_interru } /* read 64-bit timer counter */ -static cycle_t notrace sirfsoc_timer_read(struct clocksource *cs) +static u64 notrace sirfsoc_timer_read(struct clocksource *cs) { u64 cycles; --- a/drivers/clocksource/timer-sun5i.c +++ b/drivers/clocksource/timer-sun5i.c @@ -152,7 +152,7 @@ static irqreturn_t sun5i_timer_interrupt return IRQ_HANDLED; } -static cycle_t sun5i_clksrc_read(struct clocksource *clksrc) +static u64 sun5i_clksrc_read(struct clocksource *clksrc) { struct sun5i_timer_clksrc *cs = to_sun5i_timer_clksrc(clksrc); --- a/drivers/clocksource/timer-ti-32k.c +++ b/drivers/clocksource/timer-ti-32k.c @@ -65,11 +65,11 @@ static inline struct ti_32k *to_ti_32k(s return container_of(cs, struct ti_32k, cs); } -static cycle_t notrace ti_32k_read_cycles(struct clocksource *cs) +static u64 notrace ti_32k_read_cycles(struct clocksource *cs) { struct ti_32k *ti = to_ti_32k(cs); - return (cycle_t)readl_relaxed(ti->counter); + return (u64)readl_relaxed(ti->counter); } static struct ti_32k ti_32k_timer = { --- a/drivers/clocksource/vt8500_timer.c +++ b/drivers/clocksource/vt8500_timer.c @@ -53,7 +53,7 @@ static void __iomem *regbase; -static cycle_t vt8500_timer_read(struct clocksource *cs) +static u64 vt8500_timer_read(struct clocksource *cs) { int loops = msecs_to_loops(10); writel(3, regbase + TIMER_CTRL_VAL); @@ -75,7 +75,7 @@ static int vt8500_timer_set_next_event(u struct clock_event_device *evt) { int loops = msecs_to_loops(10); - cycle_t alarm = clocksource.read(&clocksource) + cycles; + u64 alarm = clocksource.read(&clocksource) + cycles; while ((readl(regbase + TIMER_AS_VAL) & TIMER_MATCH_W_ACTIVE) && --loops) cpu_relax(); --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c @@ -135,9 +135,9 @@ u64 hv_do_hypercall(u64 control, void *i EXPORT_SYMBOL_GPL(hv_do_hypercall); #ifdef CONFIG_X86_64 -static cycle_t read_hv_clock_tsc(struct clocksource *arg) +static u64 read_hv_clock_tsc(struct clocksource *arg) { - cycle_t 
current_tick; + u64 current_tick; struct ms_hyperv_tsc_page *tsc_pg = hv_context.tsc_page; if (tsc_pg->tsc_sequence != 0) { @@ -146,7 +146,7 @@ static cycle_t read_hv_clock_tsc(struct */ while (1) { - cycle_t tmp; + u64 tmp; u32 sequence = tsc_pg->tsc_sequence; u64 cur_tsc; u64 scale = tsc_pg->tsc_scale; @@ -350,7 +350,7 @@ int hv_post_message(union hv_connection_ static int hv_ce_set_next_event(unsigned long delta, struct clock_event_device *evt) { - cycle_t current_tick; + u64 current_tick; WARN_ON(!clockevent_state_oneshot(evt)); --- a/drivers/irqchip/irq-mips-gic.c +++ b/drivers/irqchip/irq-mips-gic.c @@ -152,12 +152,12 @@ static inline void gic_map_to_vpe(unsign } #ifdef CONFIG_CLKSRC_MIPS_GIC -cycle_t gic_read_count(void) +u64 gic_read_count(void) { unsigned int hi, hi2, lo; if (mips_cm_is64) - return (cycle_t)gic_read(GIC_REG(SHARED, GIC_SH_COUNTER)); + return (u64)gic_read(GIC_REG(SHARED, GIC_SH_COUNTER)); do { hi = gic_read32(GIC_REG(SHARED, GIC_SH_COUNTER_63_32)); @@ -165,7 +165,7 @@ cycle_t gic_read_count(void) hi2 = gic_read32(GIC_REG(SHARED, GIC_SH_COUNTER_63_32)); } while (hi2 != hi); - return (((cycle_t) hi) << 32) + lo; + return (((u64) hi) << 32) + lo; } unsigned int gic_get_count_width(void) @@ -179,7 +179,7 @@ unsigned int gic_get_count_width(void) return bits; } -void gic_write_compare(cycle_t cnt) +void gic_write_compare(u64 cnt) { if (mips_cm_is64) { gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE), cnt); @@ -191,7 +191,7 @@ void gic_write_compare(cycle_t cnt) } } -void gic_write_cpu_compare(cycle_t cnt, int cpu) +void gic_write_cpu_compare(u64 cnt, int cpu) { unsigned long flags; @@ -211,17 +211,17 @@ void gic_write_cpu_compare(cycle_t cnt, local_irq_restore(flags); } -cycle_t gic_read_compare(void) +u64 gic_read_compare(void) { unsigned int hi, lo; if (mips_cm_is64) - return (cycle_t)gic_read(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE)); + return (u64)gic_read(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE)); hi = gic_read32(GIC_REG(VPE_LOCAL, 
GIC_VPE_COMPARE_HI)); lo = gic_read32(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE_LO)); - return (((cycle_t) hi) << 32) + lo; + return (((u64) hi) << 32) + lo; } void gic_start_count(void) --- a/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c @@ -122,7 +122,7 @@ #include "xgbe.h" #include "xgbe-common.h" -static cycle_t xgbe_cc_read(const struct cyclecounter *cc) +static u64 xgbe_cc_read(const struct cyclecounter *cc) { struct xgbe_prv_data *pdata = container_of(cc, struct xgbe_prv_data, --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c @@ -15219,7 +15219,7 @@ void bnx2x_set_rx_ts(struct bnx2x *bp, s } /* Read the PHC */ -static cycle_t bnx2x_cyclecounter_read(const struct cyclecounter *cc) +static u64 bnx2x_cyclecounter_read(const struct cyclecounter *cc) { struct bnx2x *bp = container_of(cc, struct bnx2x, cyclecounter); int port = BP_PORT(bp); --- a/drivers/net/ethernet/freescale/fec_ptp.c +++ b/drivers/net/ethernet/freescale/fec_ptp.c @@ -230,7 +230,7 @@ static int fec_ptp_enable_pps(struct fec * cyclecounter structure used to construct a ns counter from the * arbitrary fixed point registers */ -static cycle_t fec_ptp_read(const struct cyclecounter *cc) +static u64 fec_ptp_read(const struct cyclecounter *cc) { struct fec_enet_private *fep = container_of(cc, struct fec_enet_private, cc); --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -4305,24 +4305,24 @@ void e1000e_reinit_locked(struct e1000_a /** * e1000e_sanitize_systim - sanitize raw cycle counter reads * @hw: pointer to the HW structure - * @systim: cycle_t value read, sanitized and returned + * @systim: u64 timestamp value read, sanitized and returned * * Errata for 82574/82583 possible bad bits read from SYSTIMH/L: * check to see that the time is incrementing at a reasonable * rate and is a multiple of incvalue. 
**/ -static cycle_t e1000e_sanitize_systim(struct e1000_hw *hw, cycle_t systim) +static u64 e1000e_sanitize_systim(struct e1000_hw *hw, u64 systim) { u64 time_delta, rem, temp; - cycle_t systim_next; + u64 systim_next; u32 incvalue; int i; incvalue = er32(TIMINCA) & E1000_TIMINCA_INCVALUE_MASK; for (i = 0; i < E1000_MAX_82574_SYSTIM_REREADS; i++) { /* latch SYSTIMH on read of SYSTIML */ - systim_next = (cycle_t)er32(SYSTIML); - systim_next |= (cycle_t)er32(SYSTIMH) << 32; + systim_next = (u64)er32(SYSTIML); + systim_next |= (u64)er32(SYSTIMH) << 32; time_delta = systim_next - systim; temp = time_delta; @@ -4342,13 +4342,13 @@ static cycle_t e1000e_sanitize_systim(st * e1000e_cyclecounter_read - read raw cycle counter (used by time counter) * @cc: cyclecounter structure **/ -static cycle_t e1000e_cyclecounter_read(const struct cyclecounter *cc) +static u64 e1000e_cyclecounter_read(const struct cyclecounter *cc) { struct e1000_adapter *adapter = container_of(cc, struct e1000_adapter, cc); struct e1000_hw *hw = &adapter->hw; u32 systimel, systimeh; - cycle_t systim; + u64 systim; /* SYSTIMH latching upon SYSTIML read does not work well. 
* This means that if SYSTIML overflows after we read it but before * we read SYSTIMH, the value of SYSTIMH has been incremented and we @@ -4368,8 +4368,8 @@ static cycle_t e1000e_cyclecounter_read( systimel = systimel_2; } } - systim = (cycle_t)systimel; - systim |= (cycle_t)systimeh << 32; + systim = (u64)systimel; + systim |= (u64)systimeh << 32; if (adapter->flags2 & FLAG2_CHECK_SYSTIM_OVERFLOW) systim = e1000e_sanitize_systim(hw, systim); --- a/drivers/net/ethernet/intel/e1000e/ptp.c +++ b/drivers/net/ethernet/intel/e1000e/ptp.c @@ -127,8 +127,8 @@ static int e1000e_phc_get_syncdevicetime unsigned long flags; int i; u32 tsync_ctrl; - cycle_t dev_cycles; - cycle_t sys_cycles; + u64 dev_cycles; + u64 sys_cycles; tsync_ctrl = er32(TSYNCTXCTL); tsync_ctrl |= E1000_TSYNCTXCTL_START_SYNC | --- a/drivers/net/ethernet/intel/igb/igb_ptp.c +++ b/drivers/net/ethernet/intel/igb/igb_ptp.c @@ -77,7 +77,7 @@ static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter); /* SYSTIM read access for the 82576 */ -static cycle_t igb_ptp_read_82576(const struct cyclecounter *cc) +static u64 igb_ptp_read_82576(const struct cyclecounter *cc) { struct igb_adapter *igb = container_of(cc, struct igb_adapter, cc); struct e1000_hw *hw = &igb->hw; @@ -94,7 +94,7 @@ static cycle_t igb_ptp_read_82576(const } /* SYSTIM read access for the 82580 */ -static cycle_t igb_ptp_read_82580(const struct cyclecounter *cc) +static u64 igb_ptp_read_82580(const struct cyclecounter *cc) { struct igb_adapter *igb = container_of(cc, struct igb_adapter, cc); struct e1000_hw *hw = &igb->hw; --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c @@ -245,7 +245,7 @@ static void ixgbe_ptp_setup_sdp_x540(str * result of SYSTIME is 32bits of "billions of cycles" and 32 bits of * "cycles", rather than seconds and nanoseconds. 
*/ -static cycle_t ixgbe_ptp_read_X550(const struct cyclecounter *hw_cc) +static u64 ixgbe_ptp_read_X550(const struct cyclecounter *hw_cc) { struct ixgbe_adapter *adapter = container_of(hw_cc, struct ixgbe_adapter, hw_cc); @@ -282,7 +282,7 @@ static cycle_t ixgbe_ptp_read_X550(const * cyclecounter structure used to construct a ns counter from the * arbitrary fixed point registers */ -static cycle_t ixgbe_ptp_read_82599(const struct cyclecounter *cc) +static u64 ixgbe_ptp_read_82599(const struct cyclecounter *cc) { struct ixgbe_adapter *adapter = container_of(cc, struct ixgbe_adapter, hw_cc); --- a/drivers/net/ethernet/mellanox/mlx4/en_clock.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_clock.c @@ -38,7 +38,7 @@ /* mlx4_en_read_clock - read raw cycle counter (to be used by time counter) */ -static cycle_t mlx4_en_read_clock(const struct cyclecounter *tc) +static u64 mlx4_en_read_clock(const struct cyclecounter *tc) { struct mlx4_en_dev *mdev = container_of(tc, struct mlx4_en_dev, cycles); --- a/drivers/net/ethernet/mellanox/mlx4/main.c +++ b/drivers/net/ethernet/mellanox/mlx4/main.c @@ -1823,10 +1823,10 @@ static void unmap_bf_area(struct mlx4_de io_mapping_free(mlx4_priv(dev)->bf_mapping); } -cycle_t mlx4_read_clock(struct mlx4_dev *dev) +u64 mlx4_read_clock(struct mlx4_dev *dev) { u32 clockhi, clocklo, clockhi1; - cycle_t cycles; + u64 cycles; int i; struct mlx4_priv *priv = mlx4_priv(dev); --- a/drivers/net/ethernet/mellanox/mlx5/core/en_clock.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_clock.c @@ -49,7 +49,7 @@ void mlx5e_fill_hwstamp(struct mlx5e_tst hwts->hwtstamp = ns_to_ktime(nsec); } -static cycle_t mlx5e_read_internal_timer(const struct cyclecounter *cc) +static u64 mlx5e_read_internal_timer(const struct cyclecounter *cc) { struct mlx5e_tstamp *tstamp = container_of(cc, struct mlx5e_tstamp, cycles); --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -522,7 +522,7 @@ int 
mlx5_core_disable_hca(struct mlx5_co return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); } -cycle_t mlx5_read_internal_timer(struct mlx5_core_dev *dev) +u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev) { u32 timer_h, timer_h1, timer_l; @@ -532,7 +532,7 @@ cycle_t mlx5_read_internal_timer(struct if (timer_h != timer_h1) /* wrap around */ timer_l = ioread32be(&dev->iseg->internal_timer_l); - return (cycle_t)timer_l | (cycle_t)timer_h1 << 32; + return (u64)timer_l | (u64)timer_h1 << 32; } static int mlx5_irq_set_affinity_hint(struct mlx5_core_dev *mdev, int i) --- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h @@ -93,7 +93,7 @@ bool mlx5_sriov_is_enabled(struct mlx5_c int mlx5_core_enable_hca(struct mlx5_core_dev *dev, u16 func_id); int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id); int mlx5_wait_for_vf_pages(struct mlx5_core_dev *dev); -cycle_t mlx5_read_internal_timer(struct mlx5_core_dev *dev); +u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev); u32 mlx5_get_msix_vec(struct mlx5_core_dev *dev, int vecidx); struct mlx5_eq *mlx5_eqn2eq(struct mlx5_core_dev *dev, int eqn); void mlx5_cq_tasklet_cb(unsigned long data); --- a/drivers/net/ethernet/ti/cpts.c +++ b/drivers/net/ethernet/ti/cpts.c @@ -101,7 +101,7 @@ static int cpts_fifo_read(struct cpts *c return type == match ? 
0 : -1; } -static cycle_t cpts_systim_read(const struct cyclecounter *cc) +static u64 cpts_systim_read(const struct cyclecounter *cc) { u64 val = 0; struct cpts_event *event; --- a/include/kvm/arm_arch_timer.h +++ b/include/kvm/arm_arch_timer.h @@ -25,13 +25,13 @@ struct arch_timer_kvm { /* Virtual offset */ - cycle_t cntvoff; + u64 cntvoff; }; struct arch_timer_cpu { /* Registers: control register, timer value */ u32 cntv_ctl; /* Saved/restored */ - cycle_t cntv_cval; /* Saved/restored */ + u64 cntv_cval; /* Saved/restored */ /* * Anything that is not used directly from assembly code goes --- a/include/linux/clocksource.h +++ b/include/linux/clocksource.h @@ -75,8 +75,8 @@ struct module; * structure. */ struct clocksource { - cycle_t (*read)(struct clocksource *cs); - cycle_t mask; + u64 (*read)(struct clocksource *cs); + u64 mask; u32 mult; u32 shift; u64 max_idle_ns; @@ -98,8 +98,8 @@ struct clocksource { #ifdef CONFIG_CLOCKSOURCE_WATCHDOG /* Watchdog related data, used by the framework */ struct list_head wd_list; - cycle_t cs_last; - cycle_t wd_last; + u64 cs_last; + u64 wd_last; #endif struct module *owner; }; @@ -117,7 +117,7 @@ struct clocksource { #define CLOCK_SOURCE_RESELECT 0x100 /* simplify initialization of mask field */ -#define CLOCKSOURCE_MASK(bits) (cycle_t)((bits) < 64 ? ((1ULL<<(bits))-1) : -1) +#define CLOCKSOURCE_MASK(bits) (u64)((bits) < 64 ? 
((1ULL<<(bits))-1) : -1) static inline u32 clocksource_freq2mult(u32 freq, u32 shift_constant, u64 from) { @@ -173,7 +173,7 @@ static inline u32 clocksource_hz2mult(u3 * * XXX - This could use some mult_lxl_ll() asm optimization */ -static inline s64 clocksource_cyc2ns(cycle_t cycles, u32 mult, u32 shift) +static inline s64 clocksource_cyc2ns(u64 cycles, u32 mult, u32 shift) { return ((u64) cycles * mult) >> shift; } @@ -233,13 +233,13 @@ static inline void __clocksource_update_ extern int timekeeping_notify(struct clocksource *clock); -extern cycle_t clocksource_mmio_readl_up(struct clocksource *); -extern cycle_t clocksource_mmio_readl_down(struct clocksource *); -extern cycle_t clocksource_mmio_readw_up(struct clocksource *); -extern cycle_t clocksource_mmio_readw_down(struct clocksource *); +extern u64 clocksource_mmio_readl_up(struct clocksource *); +extern u64 clocksource_mmio_readl_down(struct clocksource *); +extern u64 clocksource_mmio_readw_up(struct clocksource *); +extern u64 clocksource_mmio_readw_down(struct clocksource *); extern int clocksource_mmio_init(void __iomem *, const char *, - unsigned long, int, unsigned, cycle_t (*)(struct clocksource *)); + unsigned long, int, unsigned, u64 (*)(struct clocksource *)); extern int clocksource_i8253_init(void); --- a/include/linux/dw_apb_timer.h +++ b/include/linux/dw_apb_timer.h @@ -50,6 +50,6 @@ dw_apb_clocksource_init(unsigned rating, unsigned long freq); void dw_apb_clocksource_register(struct dw_apb_clocksource *dw_cs); void dw_apb_clocksource_start(struct dw_apb_clocksource *dw_cs); -cycle_t dw_apb_clocksource_read(struct dw_apb_clocksource *dw_cs); +u64 dw_apb_clocksource_read(struct dw_apb_clocksource *dw_cs); #endif /* __DW_APB_TIMER_H__ */ --- a/include/linux/irqchip/mips-gic.h +++ b/include/linux/irqchip/mips-gic.h @@ -259,11 +259,11 @@ extern void gic_init(unsigned long gic_b unsigned long gic_addrspace_size, unsigned int cpu_vec, unsigned int irqbase); extern void gic_clocksource_init(unsigned 
int); -extern cycle_t gic_read_count(void); +extern u64 gic_read_count(void); extern unsigned int gic_get_count_width(void); -extern cycle_t gic_read_compare(void); -extern void gic_write_compare(cycle_t cnt); -extern void gic_write_cpu_compare(cycle_t cnt, int cpu); +extern u64 gic_read_compare(void); +extern void gic_write_compare(u64 cnt); +extern void gic_write_cpu_compare(u64 cnt, int cpu); extern void gic_start_count(void); extern void gic_stop_count(void); extern int gic_get_c0_compare_int(void); --- a/include/linux/mlx4/device.h +++ b/include/linux/mlx4/device.h @@ -1461,7 +1461,7 @@ int mlx4_get_roce_gid_from_slave(struct int mlx4_FLOW_STEERING_IB_UC_QP_RANGE(struct mlx4_dev *dev, u32 min_range_qpn, u32 max_range_qpn); -cycle_t mlx4_read_clock(struct mlx4_dev *dev); +u64 mlx4_read_clock(struct mlx4_dev *dev); struct mlx4_active_ports { DECLARE_BITMAP(ports, MLX4_MAX_PORTS); --- a/include/linux/timecounter.h +++ b/include/linux/timecounter.h @@ -20,7 +20,7 @@ #include <linux/types.h> /* simplify initialization of mask field */ -#define CYCLECOUNTER_MASK(bits) (cycle_t)((bits) < 64 ? ((1ULL<<(bits))-1) : -1) +#define CYCLECOUNTER_MASK(bits) (u64)((bits) < 64 ? ((1ULL<<(bits))-1) : -1) /** * struct cyclecounter - hardware abstraction for a free running counter @@ -37,8 +37,8 @@ * @shift: cycle to nanosecond divisor (power of two) */ struct cyclecounter { - cycle_t (*read)(const struct cyclecounter *cc); - cycle_t mask; + u64 (*read)(const struct cyclecounter *cc); + u64 mask; u32 mult; u32 shift; }; @@ -63,7 +63,7 @@ struct cyclecounter { */ struct timecounter { const struct cyclecounter *cc; - cycle_t cycle_last; + u64 cycle_last; u64 nsec; u64 mask; u64 frac; @@ -77,7 +77,7 @@ struct timecounter { * @frac: pointer to storage for the fractional nanoseconds. 
*/ static inline u64 cyclecounter_cyc2ns(const struct cyclecounter *cc, - cycle_t cycles, u64 mask, u64 *frac) + u64 cycles, u64 mask, u64 *frac) { u64 ns = (u64) cycles; @@ -134,6 +134,6 @@ extern u64 timecounter_read(struct timec * in the past. */ extern u64 timecounter_cyc2time(struct timecounter *tc, - cycle_t cycle_tstamp); + u64 cycle_tstamp); #endif --- a/include/linux/timekeeper_internal.h +++ b/include/linux/timekeeper_internal.h @@ -29,9 +29,9 @@ */ struct tk_read_base { struct clocksource *clock; - cycle_t (*read)(struct clocksource *cs); - cycle_t mask; - cycle_t cycle_last; + u64 (*read)(struct clocksource *cs); + u64 mask; + u64 cycle_last; u32 mult; u32 shift; u64 xtime_nsec; @@ -97,7 +97,7 @@ struct timekeeper { struct timespec64 raw_time; /* The following members are for timekeeping internal use */ - cycle_t cycle_interval; + u64 cycle_interval; u64 xtime_interval; s64 xtime_remainder; u32 raw_interval; @@ -136,7 +136,7 @@ extern void update_vsyscall_tz(void); extern void update_vsyscall_old(struct timespec *ts, struct timespec *wtm, struct clocksource *c, u32 mult, - cycle_t cycle_last); + u64 cycle_last); extern void update_vsyscall_tz(void); #else --- a/include/linux/timekeeping.h +++ b/include/linux/timekeeping.h @@ -292,7 +292,7 @@ extern void ktime_get_raw_and_real_ts64( * @cs_was_changed_seq: The sequence number of clocksource change events */ struct system_time_snapshot { - cycle_t cycles; + u64 cycles; ktime_t real; ktime_t raw; unsigned int clock_was_set_seq; @@ -320,7 +320,7 @@ struct system_device_crosststamp { * timekeeping code to verify comparibility of two cycle values */ struct system_counterval_t { - cycle_t cycles; + u64 cycles; struct clocksource *cs; }; --- a/include/linux/types.h +++ b/include/linux/types.h @@ -228,8 +228,5 @@ struct callback_head { typedef void (*rcu_callback_t)(struct rcu_head *head); typedef void (*call_rcu_func_t)(struct rcu_head *head, rcu_callback_t func); -/* clocksource cycle base type */ -typedef u64 
cycle_t; - #endif /* __ASSEMBLY__ */ #endif /* _LINUX_TYPES_H */ --- a/kernel/time/clocksource.c +++ b/kernel/time/clocksource.c @@ -169,7 +169,7 @@ void clocksource_mark_unstable(struct cl static void clocksource_watchdog(unsigned long data) { struct clocksource *cs; - cycle_t csnow, wdnow, cslast, wdlast, delta; + u64 csnow, wdnow, cslast, wdlast, delta; int64_t wd_nsec, cs_nsec; int next_cpu, reset_pending; --- a/kernel/time/jiffies.c +++ b/kernel/time/jiffies.c @@ -59,9 +59,9 @@ #define JIFFIES_SHIFT 8 #endif -static cycle_t jiffies_read(struct clocksource *cs) +static u64 jiffies_read(struct clocksource *cs) { - return (cycle_t) jiffies; + return (u64) jiffies; } static struct clocksource clocksource_jiffies = { --- a/kernel/time/timecounter.c +++ b/kernel/time/timecounter.c @@ -43,7 +43,7 @@ EXPORT_SYMBOL_GPL(timecounter_init); */ static u64 timecounter_read_delta(struct timecounter *tc) { - cycle_t cycle_now, cycle_delta; + u64 cycle_now, cycle_delta; u64 ns_offset; /* read cycle counter: */ @@ -80,7 +80,7 @@ EXPORT_SYMBOL_GPL(timecounter_read); * time previous to the time stored in the cycle counter. 
*/ static u64 cc_cyc2ns_backwards(const struct cyclecounter *cc, - cycle_t cycles, u64 mask, u64 frac) + u64 cycles, u64 mask, u64 frac) { u64 ns = (u64) cycles; @@ -90,7 +90,7 @@ static u64 cc_cyc2ns_backwards(const str } u64 timecounter_cyc2time(struct timecounter *tc, - cycle_t cycle_tstamp) + u64 cycle_tstamp) { u64 delta = (cycle_tstamp - tc->cycle_last) & tc->cc->mask; u64 nsec = tc->nsec, frac = tc->frac; --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -119,10 +119,10 @@ static inline void tk_update_sleep_time( #ifdef CONFIG_DEBUG_TIMEKEEPING #define WARNING_FREQ (HZ*300) /* 5 minute rate-limiting */ -static void timekeeping_check_update(struct timekeeper *tk, cycle_t offset) +static void timekeeping_check_update(struct timekeeper *tk, u64 offset) { - cycle_t max_cycles = tk->tkr_mono.clock->max_cycles; + u64 max_cycles = tk->tkr_mono.clock->max_cycles; const char *name = tk->tkr_mono.clock->name; if (offset > max_cycles) { @@ -158,10 +158,10 @@ static void timekeeping_check_update(str } } -static inline cycle_t timekeeping_get_delta(struct tk_read_base *tkr) +static inline u64 timekeeping_get_delta(struct tk_read_base *tkr) { struct timekeeper *tk = &tk_core.timekeeper; - cycle_t now, last, mask, max, delta; + u64 now, last, mask, max, delta; unsigned int seq; /* @@ -199,12 +199,12 @@ static inline cycle_t timekeeping_get_de return delta; } #else -static inline void timekeeping_check_update(struct timekeeper *tk, cycle_t offset) +static inline void timekeeping_check_update(struct timekeeper *tk, u64 offset) { } -static inline cycle_t timekeeping_get_delta(struct tk_read_base *tkr) +static inline u64 timekeeping_get_delta(struct tk_read_base *tkr) { - cycle_t cycle_now, delta; + u64 cycle_now, delta; /* read clocksource */ cycle_now = tkr->read(tkr->clock); @@ -229,7 +229,7 @@ static inline cycle_t timekeeping_get_de */ static void tk_setup_internals(struct timekeeper *tk, struct clocksource *clock) { - cycle_t interval; + u64 interval; 
u64 tmp, ntpinterval; struct clocksource *old_clock; @@ -254,7 +254,7 @@ static void tk_setup_internals(struct ti if (tmp == 0) tmp = 1; - interval = (cycle_t) tmp; + interval = (u64) tmp; tk->cycle_interval = interval; /* Go back from cycles -> shifted ns */ @@ -346,16 +346,16 @@ static inline u64 timekeeping_delta_to_n static inline u64 timekeeping_get_ns(struct tk_read_base *tkr) { - cycle_t delta; + u64 delta; delta = timekeeping_get_delta(tkr); return timekeeping_delta_to_ns(tkr, delta); } static inline u64 timekeeping_cycles_to_ns(struct tk_read_base *tkr, - cycle_t cycles) + u64 cycles) { - cycle_t delta; + u64 delta; /* calculate the delta since the last update_wall_time */ delta = clocksource_delta(cycles, tkr->cycle_last, tkr->mask); @@ -459,9 +459,9 @@ u64 ktime_get_raw_fast_ns(void) EXPORT_SYMBOL_GPL(ktime_get_raw_fast_ns); /* Suspend-time cycles value for halted fast timekeeper. */ -static cycle_t cycles_at_suspend; +static u64 cycles_at_suspend; -static cycle_t dummy_clock_read(struct clocksource *cs) +static u64 dummy_clock_read(struct clocksource *cs) { return cycles_at_suspend; } @@ -655,7 +655,7 @@ static void timekeeping_update(struct ti static void timekeeping_forward_now(struct timekeeper *tk) { struct clocksource *clock = tk->tkr_mono.clock; - cycle_t cycle_now, delta; + u64 cycle_now, delta; u64 nsec; cycle_now = tk->tkr_mono.read(clock); @@ -928,7 +928,7 @@ void ktime_get_snapshot(struct system_ti ktime_t base_real; u64 nsec_raw; u64 nsec_real; - cycle_t now; + u64 now; WARN_ON_ONCE(timekeeping_suspended); @@ -987,8 +987,8 @@ static int scale64_check_overflow(u64 mu * interval is partial_history_cycles. 
*/ static int adjust_historical_crosststamp(struct system_time_snapshot *history, - cycle_t partial_history_cycles, - cycle_t total_history_cycles, + u64 partial_history_cycles, + u64 total_history_cycles, bool discontinuity, struct system_device_crosststamp *ts) { @@ -1052,7 +1052,7 @@ static int adjust_historical_crosststamp /* * cycle_between - true if test occurs chronologically between before and after */ -static bool cycle_between(cycle_t before, cycle_t test, cycle_t after) +static bool cycle_between(u64 before, u64 test, u64 after) { if (test > before && test < after) return true; @@ -1082,7 +1082,7 @@ int get_device_system_crosststamp(int (* { struct system_counterval_t system_counterval; struct timekeeper *tk = &tk_core.timekeeper; - cycle_t cycles, now, interval_start; + u64 cycles, now, interval_start; unsigned int clock_was_set_seq = 0; ktime_t base_real, base_raw; u64 nsec_real, nsec_raw; @@ -1143,7 +1143,7 @@ int get_device_system_crosststamp(int (* * current interval */ if (do_interp) { - cycle_t partial_history_cycles, total_history_cycles; + u64 partial_history_cycles, total_history_cycles; bool discontinuity; /* @@ -1649,7 +1649,7 @@ void timekeeping_resume(void) struct clocksource *clock = tk->tkr_mono.clock; unsigned long flags; struct timespec64 ts_new, ts_delta; - cycle_t cycle_now; + u64 cycle_now; sleeptime_injected = false; read_persistent_clock64(&ts_new); @@ -2015,11 +2015,11 @@ static inline unsigned int accumulate_ns * * Returns the unconsumed cycles. 
*/ -static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset, +static u64 logarithmic_accumulation(struct timekeeper *tk, u64 offset, u32 shift, unsigned int *clock_set) { - cycle_t interval = tk->cycle_interval << shift; + u64 interval = tk->cycle_interval << shift; u64 raw_nsecs; /* If the offset is smaller than a shifted interval, do nothing */ @@ -2060,7 +2060,7 @@ void update_wall_time(void) { struct timekeeper *real_tk = &tk_core.timekeeper; struct timekeeper *tk = &shadow_timekeeper; - cycle_t offset; + u64 offset; int shift = 0, maxshift; unsigned int clock_set = 0; unsigned long flags; --- a/kernel/time/timekeeping_internal.h +++ b/kernel/time/timekeeping_internal.h @@ -13,9 +13,9 @@ extern void tk_debug_account_sleep_time( #endif #ifdef CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE -static inline cycle_t clocksource_delta(cycle_t now, cycle_t last, cycle_t mask) +static inline u64 clocksource_delta(u64 now, u64 last, u64 mask) { - cycle_t ret = (now - last) & mask; + u64 ret = (now - last) & mask; /* * Prevent time going backwards by checking the MSB of mask in @@ -24,7 +24,7 @@ static inline cycle_t clocksource_delta( return ret & ~(mask >> 1) ? 
0 : ret; } #else -static inline cycle_t clocksource_delta(cycle_t now, cycle_t last, cycle_t mask) +static inline u64 clocksource_delta(u64 now, u64 last, u64 mask) { return (now - last) & mask; } --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -2847,7 +2847,7 @@ static void ftrace_shutdown_sysctl(void) } } -static cycle_t ftrace_update_time; +static u64 ftrace_update_time; unsigned long ftrace_update_tot_cnt; static inline int ops_traces_mod(struct ftrace_ops *ops) @@ -2894,7 +2894,7 @@ static int ftrace_update_code(struct mod { struct ftrace_page *pg; struct dyn_ftrace *p; - cycle_t start, stop; + u64 start, stop; unsigned long update_cnt = 0; unsigned long rec_flags = 0; int i; --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -234,7 +234,7 @@ static int __init set_tracepoint_printk( } __setup("tp_printk", set_tracepoint_printk); -unsigned long long ns2usecs(cycle_t nsec) +unsigned long long ns2usecs(u64 nsec) { nsec += 500; do_div(nsec, 1000); @@ -571,7 +571,7 @@ int trace_pid_write(struct trace_pid_lis return read; } -static cycle_t buffer_ftrace_now(struct trace_buffer *buf, int cpu) +static u64 buffer_ftrace_now(struct trace_buffer *buf, int cpu) { u64 ts; @@ -585,7 +585,7 @@ static cycle_t buffer_ftrace_now(struct return ts; } -cycle_t ftrace_now(int cpu) +u64 ftrace_now(int cpu) { return buffer_ftrace_now(&global_trace.trace_buffer, cpu); } --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -157,7 +157,7 @@ struct trace_array_cpu { unsigned long policy; unsigned long rt_priority; unsigned long skipped_entries; - cycle_t preempt_timestamp; + u64 preempt_timestamp; pid_t pid; kuid_t uid; char comm[TASK_COMM_LEN]; @@ -175,7 +175,7 @@ struct trace_buffer { struct trace_array *tr; struct ring_buffer *buffer; struct trace_array_cpu __percpu *data; - cycle_t time_start; + u64 time_start; int cpu; }; @@ -686,7 +686,7 @@ static inline void __trace_stack(struct } #endif /* CONFIG_STACKTRACE */ -extern cycle_t ftrace_now(int cpu); +extern u64 
ftrace_now(int cpu); extern void trace_find_cmdline(int pid, char comm[]); extern void trace_event_follow_fork(struct trace_array *tr, bool enable); @@ -733,7 +733,7 @@ extern int trace_selftest_startup_branch #endif /* CONFIG_FTRACE_STARTUP_TEST */ extern void *head_page(struct trace_array_cpu *data); -extern unsigned long long ns2usecs(cycle_t nsec); +extern unsigned long long ns2usecs(u64 nsec); extern int trace_vbprintk(unsigned long ip, const char *fmt, va_list args); extern int --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -286,7 +286,7 @@ static void irqsoff_print_header(struct /* * Should this new latency be reported/recorded? */ -static bool report_latency(struct trace_array *tr, cycle_t delta) +static bool report_latency(struct trace_array *tr, u64 delta) { if (tracing_thresh) { if (delta < tracing_thresh) @@ -304,7 +304,7 @@ check_critical_timing(struct trace_array unsigned long parent_ip, int cpu) { - cycle_t T0, T1, delta; + u64 T0, T1, delta; unsigned long flags; int pc; --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -346,7 +346,7 @@ static void wakeup_print_header(struct s /* * Should this new latency be reported/recorded? 
*/ -static bool report_latency(struct trace_array *tr, cycle_t delta) +static bool report_latency(struct trace_array *tr, u64 delta) { if (tracing_thresh) { if (delta < tracing_thresh) @@ -428,7 +428,7 @@ probe_wakeup_sched_switch(void *ignore, struct task_struct *prev, struct task_struct *next) { struct trace_array_cpu *data; - cycle_t T0, T1, delta; + u64 T0, T1, delta; unsigned long flags; long disabled; int cpu; --- a/sound/hda/hdac_stream.c +++ b/sound/hda/hdac_stream.c @@ -465,7 +465,7 @@ int snd_hdac_stream_set_params(struct hd } EXPORT_SYMBOL_GPL(snd_hdac_stream_set_params); -static cycle_t azx_cc_read(const struct cyclecounter *cc) +static u64 azx_cc_read(const struct cyclecounter *cc) { struct hdac_stream *azx_dev = container_of(cc, struct hdac_stream, cc); @@ -473,7 +473,7 @@ static cycle_t azx_cc_read(const struct } static void azx_timecounter_init(struct hdac_stream *azx_dev, - bool force, cycle_t last) + bool force, u64 last) { struct timecounter *tc = &azx_dev->tc; struct cyclecounter *cc = &azx_dev->cc; @@ -523,7 +523,7 @@ void snd_hdac_stream_timecounter_init(st struct snd_pcm_runtime *runtime = azx_dev->substream->runtime; struct hdac_stream *s; bool inited = false; - cycle_t cycle_last = 0; + u64 cycle_last = 0; int i = 0; list_for_each_entry(s, &bus->stream_list, list) { --- a/virt/kvm/arm/arch_timer.c +++ b/virt/kvm/arm/arch_timer.c @@ -39,7 +39,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu vcpu->arch.timer_cpu.active_cleared_last = false; } -static cycle_t kvm_phys_timer_read(void) +static u64 kvm_phys_timer_read(void) { return timecounter->cc->read(timecounter->cc); } @@ -102,7 +102,7 @@ static void kvm_timer_inject_irq_work(st static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu) { - cycle_t cval, now; + u64 cval, now; cval = vcpu->arch.timer_cpu.cntv_cval; now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff; @@ -155,7 +155,7 @@ static bool kvm_timer_irq_can_fire(struc bool kvm_timer_should_fire(struct kvm_vcpu *vcpu) { struct 
arch_timer_cpu *timer = &vcpu->arch.timer_cpu; - cycle_t cval, now; + u64 cval, now; if (!kvm_timer_irq_can_fire(vcpu)) return false;
* Re: [patch 6/6] [RFD] timekeeping: Get rid of cycle_t 2016-12-08 20:49 ` [patch 6/6] [RFD] timekeeping: Get rid of cycle_t Thomas Gleixner @ 2016-12-08 23:43 ` David Gibson 0 siblings, 0 replies; 35+ messages in thread From: David Gibson @ 2016-12-08 23:43 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, John Stultz, Peter Zijlstra, Ingo Molnar, Liav Rehana, Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier, Christopher S. Hall [-- Attachment #1: Type: text/plain, Size: 98896 bytes --] On Thu, Dec 08, 2016 at 08:49:45PM -0000, Thomas Gleixner wrote: > Kill the ever confusing typedef and use u64. > > NOT FOR INCLUSION - Must be regenerated at some point via coccinelle > > Not-Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Not-Reviewed-by: David Gibson <david@gibson.dropbear.id.au> The concept seems sensible to me, I haven't actually gone through and verified that your script has done what you intended. > --- > arch/alpha/kernel/time.c | 4 - > arch/arc/kernel/time.c | 12 ++-- > arch/arm/mach-davinci/time.c | 2 > arch/arm/mach-ep93xx/timer-ep93xx.c | 4 - > arch/arm/mach-footbridge/dc21285-timer.c | 2 > arch/arm/mach-ixp4xx/common.c | 2 > arch/arm/mach-mmp/time.c | 2 > arch/arm/mach-omap2/timer.c | 4 - > arch/arm/plat-iop/time.c | 2 > arch/avr32/kernel/time.c | 4 - > arch/blackfin/kernel/time-ts.c | 4 - > arch/c6x/kernel/time.c | 2 > arch/hexagon/kernel/time.c | 4 - > arch/ia64/kernel/cyclone.c | 4 - > arch/ia64/kernel/fsyscall_gtod_data.h | 6 +- > arch/ia64/kernel/time.c | 6 +- > arch/ia64/sn/kernel/sn2/timer.c | 4 - > arch/m68k/68000/timers.c | 2 > arch/m68k/coldfire/dma_timer.c | 2 > arch/m68k/coldfire/pit.c | 2 > arch/m68k/coldfire/sltimers.c | 2 > arch/m68k/coldfire/timers.c | 2 > arch/microblaze/kernel/timer.c | 6 +- > arch/mips/alchemy/common/time.c | 2 > arch/mips/cavium-octeon/csrc-octeon.c | 2 > arch/mips/jz4740/time.c | 2 > arch/mips/kernel/cevt-txx9.c | 2 > arch/mips/kernel/csrc-bcm1480.c | 4 - > arch/mips/kernel/csrc-ioasic.c | 2 > 
arch/mips/kernel/csrc-r4k.c | 2 > arch/mips/kernel/csrc-sb1250.c | 4 - > arch/mips/loongson32/common/time.c | 4 - > arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c | 4 - > arch/mips/loongson64/loongson-3/hpet.c | 4 - > arch/mips/mti-malta/malta-time.c | 2 > arch/mips/netlogic/common/time.c | 4 - > arch/mips/sgi-ip27/ip27-timer.c | 2 > arch/mn10300/kernel/csrc-mn10300.c | 2 > arch/nios2/kernel/time.c | 2 > arch/openrisc/kernel/time.c | 4 - > arch/parisc/kernel/time.c | 2 > arch/powerpc/kernel/time.c | 14 ++--- > arch/s390/kernel/time.c | 2 > arch/sparc/kernel/time_32.c | 2 > arch/sparc/kernel/time_64.c | 2 > arch/um/kernel/time.c | 2 > arch/unicore32/kernel/time.c | 2 > arch/x86/entry/vdso/vclock_gettime.c | 8 +-- > arch/x86/include/asm/kvm_host.h | 2 > arch/x86/include/asm/pvclock.h | 6 +- > arch/x86/include/asm/tsc.h | 2 > arch/x86/include/asm/vgtod.h | 4 - > arch/x86/kernel/apb_timer.c | 4 - > arch/x86/kernel/cpu/mshyperv.c | 4 - > arch/x86/kernel/hpet.c | 14 ++--- > arch/x86/kernel/kvmclock.c | 10 ++-- > arch/x86/kernel/pvclock.c | 4 - > arch/x86/kernel/tsc.c | 6 +- > arch/x86/kvm/x86.c | 14 ++--- > arch/x86/lguest/boot.c | 2 > arch/x86/platform/uv/uv_time.c | 8 +-- > arch/x86/xen/time.c | 6 +- > arch/x86/xen/xen-ops.h | 2 > arch/xtensa/kernel/time.c | 4 - > drivers/char/hpet.c | 4 - > drivers/clocksource/acpi_pm.c | 14 ++--- > drivers/clocksource/arm_arch_timer.c | 4 - > drivers/clocksource/arm_global_timer.c | 2 > drivers/clocksource/cadence_ttc_timer.c | 4 - > drivers/clocksource/clksrc-dbx500-prcmu.c | 2 > drivers/clocksource/dw_apb_timer.c | 8 +-- > drivers/clocksource/em_sti.c | 12 ++-- > drivers/clocksource/exynos_mct.c | 6 +- > drivers/clocksource/h8300_timer16.c | 2 > drivers/clocksource/h8300_tpu.c | 2 > drivers/clocksource/i8253.c | 4 - > drivers/clocksource/jcore-pit.c | 2 > drivers/clocksource/metag_generic.c | 2 > drivers/clocksource/mips-gic-timer.c | 2 > drivers/clocksource/mmio.c | 18 +++---- > drivers/clocksource/mxs_timer.c | 2 > 
drivers/clocksource/qcom-timer.c | 2 > drivers/clocksource/samsung_pwm_timer.c | 2 > drivers/clocksource/scx200_hrt.c | 4 - > drivers/clocksource/sh_cmt.c | 2 > drivers/clocksource/sh_tmu.c | 2 > drivers/clocksource/tcb_clksrc.c | 4 - > drivers/clocksource/time-pistachio.c | 4 - > drivers/clocksource/timer-atlas7.c | 2 > drivers/clocksource/timer-atmel-pit.c | 2 > drivers/clocksource/timer-atmel-st.c | 2 > drivers/clocksource/timer-nps.c | 4 - > drivers/clocksource/timer-prima2.c | 2 > drivers/clocksource/timer-sun5i.c | 2 > drivers/clocksource/timer-ti-32k.c | 4 - > drivers/clocksource/vt8500_timer.c | 4 - > drivers/hv/hv.c | 8 +-- > drivers/irqchip/irq-mips-gic.c | 16 +++--- > drivers/net/ethernet/amd/xgbe/xgbe-ptp.c | 2 > drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 2 > drivers/net/ethernet/freescale/fec_ptp.c | 2 > drivers/net/ethernet/intel/e1000e/netdev.c | 18 +++---- > drivers/net/ethernet/intel/e1000e/ptp.c | 4 - > drivers/net/ethernet/intel/igb/igb_ptp.c | 4 - > drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c | 4 - > drivers/net/ethernet/mellanox/mlx4/en_clock.c | 2 > drivers/net/ethernet/mellanox/mlx4/main.c | 4 - > drivers/net/ethernet/mellanox/mlx5/core/en_clock.c | 2 > drivers/net/ethernet/mellanox/mlx5/core/main.c | 4 - > drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h | 2 > drivers/net/ethernet/ti/cpts.c | 2 > include/kvm/arm_arch_timer.h | 4 - > include/linux/clocksource.h | 22 ++++---- > include/linux/dw_apb_timer.h | 2 > include/linux/irqchip/mips-gic.h | 8 +-- > include/linux/mlx4/device.h | 2 > include/linux/timecounter.h | 12 ++-- > include/linux/timekeeper_internal.h | 10 ++-- > include/linux/timekeeping.h | 4 - > include/linux/types.h | 3 - > kernel/time/clocksource.c | 2 > kernel/time/jiffies.c | 4 - > kernel/time/timecounter.c | 6 +- > kernel/time/timekeeping.c | 50 ++++++++++---------- > kernel/time/timekeeping_internal.h | 6 +- > kernel/trace/ftrace.c | 4 - > kernel/trace/trace.c | 6 +- > kernel/trace/trace.h | 8 +-- > 
kernel/trace/trace_irqsoff.c | 4 - > kernel/trace/trace_sched_wakeup.c | 4 - > sound/hda/hdac_stream.c | 6 +- > virt/kvm/arm/arch_timer.c | 6 +- > 132 files changed, 318 insertions(+), 321 deletions(-) > > --- a/arch/alpha/kernel/time.c > +++ b/arch/alpha/kernel/time.c > @@ -133,7 +133,7 @@ init_rtc_clockevent(void) > * The QEMU clock as a clocksource primitive. > */ > > -static cycle_t > +static u64 > qemu_cs_read(struct clocksource *cs) > { > return qemu_get_vmtime(); > @@ -260,7 +260,7 @@ common_init_rtc(void) > * use this method when WTINT is in use. > */ > > -static cycle_t read_rpcc(struct clocksource *cs) > +static u64 read_rpcc(struct clocksource *cs) > { > return rpcc(); > } > --- a/arch/arc/kernel/time.c > +++ b/arch/arc/kernel/time.c > @@ -83,7 +83,7 @@ static int noinline arc_get_timer_clk(st > > #ifdef CONFIG_ARC_HAS_GFRC > > -static cycle_t arc_read_gfrc(struct clocksource *cs) > +static u64 arc_read_gfrc(struct clocksource *cs) > { > unsigned long flags; > union { > @@ -92,7 +92,7 @@ static cycle_t arc_read_gfrc(struct cloc > #else > struct { u32 l, h; }; > #endif > - cycle_t full; > + u64 full; > } stamp; > > local_irq_save(flags); > @@ -140,7 +140,7 @@ CLOCKSOURCE_OF_DECLARE(arc_gfrc, "snps,a > #define AUX_RTC_LOW 0x104 > #define AUX_RTC_HIGH 0x105 > > -static cycle_t arc_read_rtc(struct clocksource *cs) > +static u64 arc_read_rtc(struct clocksource *cs) > { > unsigned long status; > union { > @@ -149,7 +149,7 @@ static cycle_t arc_read_rtc(struct clock > #else > struct { u32 low, high; }; > #endif > - cycle_t full; > + u64 full; > } stamp; > > /* > @@ -203,9 +203,9 @@ CLOCKSOURCE_OF_DECLARE(arc_rtc, "snps,ar > * 32bit TIMER1 to keep counting monotonically and wraparound > */ > > -static cycle_t arc_read_timer1(struct clocksource *cs) > +static u64 arc_read_timer1(struct clocksource *cs) > { > - return (cycle_t) read_aux_reg(ARC_REG_TIMER1_CNT); > + return (u64) read_aux_reg(ARC_REG_TIMER1_CNT); > } > > static struct clocksource arc_counter_timer1 
= { > --- a/arch/arm/mach-davinci/time.c > +++ b/arch/arm/mach-davinci/time.c > @@ -268,7 +268,7 @@ static void __init timer_init(void) > /* > * clocksource > */ > -static cycle_t read_cycles(struct clocksource *cs) > +static u64 read_cycles(struct clocksource *cs) > { > struct timer_s *t = &timers[TID_CLOCKSOURCE]; > > --- a/arch/arm/mach-ep93xx/timer-ep93xx.c > +++ b/arch/arm/mach-ep93xx/timer-ep93xx.c > @@ -59,13 +59,13 @@ static u64 notrace ep93xx_read_sched_clo > return ret; > } > > -cycle_t ep93xx_clocksource_read(struct clocksource *c) > +u64 ep93xx_clocksource_read(struct clocksource *c) > { > u64 ret; > > ret = readl(EP93XX_TIMER4_VALUE_LOW); > ret |= ((u64) (readl(EP93XX_TIMER4_VALUE_HIGH) & 0xff) << 32); > - return (cycle_t) ret; > + return ret; > } > > static int ep93xx_clkevt_set_next_event(unsigned long next, > --- a/arch/arm/mach-footbridge/dc21285-timer.c > +++ b/arch/arm/mach-footbridge/dc21285-timer.c > @@ -19,7 +19,7 @@ > > #include "common.h" > > -static cycle_t cksrc_dc21285_read(struct clocksource *cs) > +static u64 cksrc_dc21285_read(struct clocksource *cs) > { > return cs->mask - *CSR_TIMER2_VALUE; > } > --- a/arch/arm/mach-ixp4xx/common.c > +++ b/arch/arm/mach-ixp4xx/common.c > @@ -493,7 +493,7 @@ static u64 notrace ixp4xx_read_sched_clo > * clocksource > */ > > -static cycle_t ixp4xx_clocksource_read(struct clocksource *c) > +static u64 ixp4xx_clocksource_read(struct clocksource *c) > { > return *IXP4XX_OSTS; > } > --- a/arch/arm/mach-mmp/time.c > +++ b/arch/arm/mach-mmp/time.c > @@ -144,7 +144,7 @@ static struct clock_event_device ckevt = > .set_state_oneshot = timer_set_shutdown, > }; > > -static cycle_t clksrc_read(struct clocksource *cs) > +static u64 clksrc_read(struct clocksource *cs) > { > return timer_read(); > } > --- a/arch/arm/mach-omap2/timer.c > +++ b/arch/arm/mach-omap2/timer.c > @@ -369,9 +369,9 @@ static bool use_gptimer_clksrc __initdat > /* > * clocksource > */ > -static cycle_t clocksource_read_cycles(struct clocksource 
*cs) > +static u64 clocksource_read_cycles(struct clocksource *cs) > { > - return (cycle_t)__omap_dm_timer_read_counter(&clksrc, > + return (u64)__omap_dm_timer_read_counter(&clksrc, > OMAP_TIMER_NONPOSTED); > } > > --- a/arch/arm/plat-iop/time.c > +++ b/arch/arm/plat-iop/time.c > @@ -38,7 +38,7 @@ > /* > * IOP clocksource (free-running timer 1). > */ > -static cycle_t notrace iop_clocksource_read(struct clocksource *unused) > +static u64 notrace iop_clocksource_read(struct clocksource *unused) > { > return 0xffffffffu - read_tcr1(); > } > --- a/arch/avr32/kernel/time.c > +++ b/arch/avr32/kernel/time.c > @@ -20,9 +20,9 @@ > > static bool disable_cpu_idle_poll; > > -static cycle_t read_cycle_count(struct clocksource *cs) > +static u64 read_cycle_count(struct clocksource *cs) > { > - return (cycle_t)sysreg_read(COUNT); > + return (u64)sysreg_read(COUNT); > } > > /* > --- a/arch/blackfin/kernel/time-ts.c > +++ b/arch/blackfin/kernel/time-ts.c > @@ -26,7 +26,7 @@ > > #if defined(CONFIG_CYCLES_CLOCKSOURCE) > > -static notrace cycle_t bfin_read_cycles(struct clocksource *cs) > +static notrace u64 bfin_read_cycles(struct clocksource *cs) > { > #ifdef CONFIG_CPU_FREQ > return __bfin_cycles_off + (get_cycles() << __bfin_cycles_mod); > @@ -80,7 +80,7 @@ void __init setup_gptimer0(void) > enable_gptimers(TIMER0bit); > } > > -static cycle_t bfin_read_gptimer0(struct clocksource *cs) > +static u64 bfin_read_gptimer0(struct clocksource *cs) > { > return bfin_read_TIMER0_COUNTER(); > } > --- a/arch/c6x/kernel/time.c > +++ b/arch/c6x/kernel/time.c > @@ -26,7 +26,7 @@ > static u32 sched_clock_multiplier; > #define SCHED_CLOCK_SHIFT 16 > > -static cycle_t tsc_read(struct clocksource *cs) > +static u64 tsc_read(struct clocksource *cs) > { > return get_cycles(); > } > --- a/arch/hexagon/kernel/time.c > +++ b/arch/hexagon/kernel/time.c > @@ -72,9 +72,9 @@ struct adsp_hw_timer_struct { > /* Look for "TCX0" for related constants. 
*/ > static __iomem struct adsp_hw_timer_struct *rtos_timer; > > -static cycle_t timer_get_cycles(struct clocksource *cs) > +static u64 timer_get_cycles(struct clocksource *cs) > { > - return (cycle_t) __vmgettime(); > + return (u64) __vmgettime(); > } > > static struct clocksource hexagon_clocksource = { > --- a/arch/ia64/kernel/cyclone.c > +++ b/arch/ia64/kernel/cyclone.c > @@ -21,9 +21,9 @@ void __init cyclone_setup(void) > > static void __iomem *cyclone_mc; > > -static cycle_t read_cyclone(struct clocksource *cs) > +static u64 read_cyclone(struct clocksource *cs) > { > - return (cycle_t)readq((void __iomem *)cyclone_mc); > + return (u64)readq((void __iomem *)cyclone_mc); > } > > static struct clocksource clocksource_cyclone = { > --- a/arch/ia64/kernel/fsyscall_gtod_data.h > +++ b/arch/ia64/kernel/fsyscall_gtod_data.h > @@ -9,15 +9,15 @@ struct fsyscall_gtod_data_t { > seqcount_t seq; > struct timespec wall_time; > struct timespec monotonic_time; > - cycle_t clk_mask; > + u64 clk_mask; > u32 clk_mult; > u32 clk_shift; > void *clk_fsys_mmio; > - cycle_t clk_cycle_last; > + u64 clk_cycle_last; > } ____cacheline_aligned; > > struct itc_jitter_data_t { > int itc_jitter; > - cycle_t itc_lastcycle; > + u64 itc_lastcycle; > } ____cacheline_aligned; > > --- a/arch/ia64/kernel/time.c > +++ b/arch/ia64/kernel/time.c > @@ -31,7 +31,7 @@ > > #include "fsyscall_gtod_data.h" > > -static cycle_t itc_get_cycles(struct clocksource *cs); > +static u64 itc_get_cycles(struct clocksource *cs); > > struct fsyscall_gtod_data_t fsyscall_gtod_data; > > @@ -323,7 +323,7 @@ void ia64_init_itm(void) > } > } > > -static cycle_t itc_get_cycles(struct clocksource *cs) > +static u64 itc_get_cycles(struct clocksource *cs) > { > unsigned long lcycle, now, ret; > > @@ -397,7 +397,7 @@ void update_vsyscall_tz(void) > } > > void update_vsyscall_old(struct timespec *wall, struct timespec *wtm, > - struct clocksource *c, u32 mult, cycle_t cycle_last) > + struct clocksource *c, u32 mult, u64 
cycle_last) > { > write_seqcount_begin(&fsyscall_gtod_data.seq); > > --- a/arch/ia64/sn/kernel/sn2/timer.c > +++ b/arch/ia64/sn/kernel/sn2/timer.c > @@ -22,9 +22,9 @@ > > extern unsigned long sn_rtc_cycles_per_second; > > -static cycle_t read_sn2(struct clocksource *cs) > +static u64 read_sn2(struct clocksource *cs) > { > - return (cycle_t)readq(RTC_COUNTER_ADDR); > + return (u64)readq(RTC_COUNTER_ADDR); > } > > static struct clocksource clocksource_sn2 = { > --- a/arch/m68k/68000/timers.c > +++ b/arch/m68k/68000/timers.c > @@ -76,7 +76,7 @@ static struct irqaction m68328_timer_irq > > /***************************************************************************/ > > -static cycle_t m68328_read_clk(struct clocksource *cs) > +static u64 m68328_read_clk(struct clocksource *cs) > { > unsigned long flags; > u32 cycles; > --- a/arch/m68k/coldfire/dma_timer.c > +++ b/arch/m68k/coldfire/dma_timer.c > @@ -34,7 +34,7 @@ > #define DMA_DTMR_CLK_DIV_16 (2 << 1) > #define DMA_DTMR_ENABLE (1 << 0) > > -static cycle_t cf_dt_get_cycles(struct clocksource *cs) > +static u64 cf_dt_get_cycles(struct clocksource *cs) > { > return __raw_readl(DTCN0); > } > --- a/arch/m68k/coldfire/pit.c > +++ b/arch/m68k/coldfire/pit.c > @@ -118,7 +118,7 @@ static struct irqaction pit_irq = { > > /***************************************************************************/ > > -static cycle_t pit_read_clk(struct clocksource *cs) > +static u64 pit_read_clk(struct clocksource *cs) > { > unsigned long flags; > u32 cycles; > --- a/arch/m68k/coldfire/sltimers.c > +++ b/arch/m68k/coldfire/sltimers.c > @@ -97,7 +97,7 @@ static struct irqaction mcfslt_timer_irq > .handler = mcfslt_tick, > }; > > -static cycle_t mcfslt_read_clk(struct clocksource *cs) > +static u64 mcfslt_read_clk(struct clocksource *cs) > { > unsigned long flags; > u32 cycles, scnt; > --- a/arch/m68k/coldfire/timers.c > +++ b/arch/m68k/coldfire/timers.c > @@ -89,7 +89,7 @@ static struct irqaction mcftmr_timer_irq > > 
> /***************************************************************************/
>
> -static cycle_t mcftmr_read_clk(struct clocksource *cs)
> +static u64 mcftmr_read_clk(struct clocksource *cs)
> {
> 	unsigned long flags;
> 	u32 cycles;
> --- a/arch/microblaze/kernel/timer.c
> +++ b/arch/microblaze/kernel/timer.c
> @@ -190,17 +190,17 @@ static u64 xilinx_clock_read(void)
> 	return read_fn(timer_baseaddr + TCR1);
> }
>
> -static cycle_t xilinx_read(struct clocksource *cs)
> +static u64 xilinx_read(struct clocksource *cs)
> {
> 	/* reading actual value of timer 1 */
> -	return (cycle_t)xilinx_clock_read();
> +	return (u64)xilinx_clock_read();
> }
>
> static struct timecounter xilinx_tc = {
> 	.cc = NULL,
> };
>
> -static cycle_t xilinx_cc_read(const struct cyclecounter *cc)
> +static u64 xilinx_cc_read(const struct cyclecounter *cc)
> {
> 	return xilinx_read(NULL);
> }
> --- a/arch/mips/alchemy/common/time.c
> +++ b/arch/mips/alchemy/common/time.c
> @@ -44,7 +44,7 @@
> /* 32kHz clock enabled and detected */
> #define CNTR_OK (SYS_CNTRL_E0 | SYS_CNTRL_32S)
>
> -static cycle_t au1x_counter1_read(struct clocksource *cs)
> +static u64 au1x_counter1_read(struct clocksource *cs)
> {
> 	return alchemy_rdsys(AU1000_SYS_RTCREAD);
> }
> --- a/arch/mips/cavium-octeon/csrc-octeon.c
> +++ b/arch/mips/cavium-octeon/csrc-octeon.c
> @@ -98,7 +98,7 @@ void octeon_init_cvmcount(void)
> 	local_irq_restore(flags);
> }
>
> -static cycle_t octeon_cvmcount_read(struct clocksource *cs)
> +static u64 octeon_cvmcount_read(struct clocksource *cs)
> {
> 	return read_c0_cvmcount();
> }
> --- a/arch/mips/jz4740/time.c
> +++ b/arch/mips/jz4740/time.c
> @@ -34,7 +34,7 @@
>
> static uint16_t jz4740_jiffies_per_tick;
>
> -static cycle_t jz4740_clocksource_read(struct clocksource *cs)
> +static u64 jz4740_clocksource_read(struct clocksource *cs)
> {
> 	return jz4740_timer_get_count(TIMER_CLOCKSOURCE);
> }
> --- a/arch/mips/kernel/cevt-txx9.c
> +++ b/arch/mips/kernel/cevt-txx9.c
> @@ -27,7 +27,7 @@ struct txx9_clocksource {
> 	struct txx9_tmr_reg __iomem *tmrptr;
> };
>
> -static cycle_t txx9_cs_read(struct clocksource *cs)
> +static u64 txx9_cs_read(struct clocksource *cs)
> {
> 	struct txx9_clocksource *txx9_cs =
> 		container_of(cs, struct txx9_clocksource, cs);
> --- a/arch/mips/kernel/csrc-bcm1480.c
> +++ b/arch/mips/kernel/csrc-bcm1480.c
> @@ -25,9 +25,9 @@
>
> #include <asm/sibyte/sb1250.h>
>
> -static cycle_t bcm1480_hpt_read(struct clocksource *cs)
> +static u64 bcm1480_hpt_read(struct clocksource *cs)
> {
> -	return (cycle_t) __raw_readq(IOADDR(A_SCD_ZBBUS_CYCLE_COUNT));
> +	return (u64) __raw_readq(IOADDR(A_SCD_ZBBUS_CYCLE_COUNT));
> }
>
> struct clocksource bcm1480_clocksource = {
> --- a/arch/mips/kernel/csrc-ioasic.c
> +++ b/arch/mips/kernel/csrc-ioasic.c
> @@ -22,7 +22,7 @@
> #include <asm/dec/ioasic.h>
> #include <asm/dec/ioasic_addrs.h>
>
> -static cycle_t dec_ioasic_hpt_read(struct clocksource *cs)
> +static u64 dec_ioasic_hpt_read(struct clocksource *cs)
> {
> 	return ioasic_read(IO_REG_FCTR);
> }
> --- a/arch/mips/kernel/csrc-r4k.c
> +++ b/arch/mips/kernel/csrc-r4k.c
> @@ -11,7 +11,7 @@
>
> #include <asm/time.h>
>
> -static cycle_t c0_hpt_read(struct clocksource *cs)
> +static u64 c0_hpt_read(struct clocksource *cs)
> {
> 	return read_c0_count();
> }
> --- a/arch/mips/kernel/csrc-sb1250.c
> +++ b/arch/mips/kernel/csrc-sb1250.c
> @@ -30,7 +30,7 @@
>  * The HPT is free running from SB1250_HPT_VALUE down to 0 then starts over
>  * again.
>  */
> -static inline cycle_t sb1250_hpt_get_cycles(void)
> +static inline u64 sb1250_hpt_get_cycles(void)
> {
> 	unsigned int count;
> 	void __iomem *addr;
> @@ -41,7 +41,7 @@ static inline cycle_t sb1250_hpt_get_cyc
> 	return SB1250_HPT_VALUE - count;
> }
>
> -static cycle_t sb1250_hpt_read(struct clocksource *cs)
> +static u64 sb1250_hpt_read(struct clocksource *cs)
> {
> 	return sb1250_hpt_get_cycles();
> }
> --- a/arch/mips/loongson32/common/time.c
> +++ b/arch/mips/loongson32/common/time.c
> @@ -63,7 +63,7 @@ void __init ls1x_pwmtimer_init(void)
> 	ls1x_pwmtimer_restart();
> }
>
> -static cycle_t ls1x_clocksource_read(struct clocksource *cs)
> +static u64 ls1x_clocksource_read(struct clocksource *cs)
> {
> 	unsigned long flags;
> 	int count;
> @@ -107,7 +107,7 @@ static cycle_t ls1x_clocksource_read(str
>
> 	raw_spin_unlock_irqrestore(&ls1x_timer_lock, flags);
>
> -	return (cycle_t) (jifs * ls1x_jiffies_per_tick) + count;
> +	return (u64) (jifs * ls1x_jiffies_per_tick) + count;
> }
>
> static struct clocksource ls1x_clocksource = {
> --- a/arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c
> +++ b/arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c
> @@ -144,7 +144,7 @@ void __init setup_mfgpt0_timer(void)
>  * to just read by itself. So use jiffies to emulate a free
>  * running counter:
>  */
> -static cycle_t mfgpt_read(struct clocksource *cs)
> +static u64 mfgpt_read(struct clocksource *cs)
> {
> 	unsigned long flags;
> 	int count;
> @@ -188,7 +188,7 @@ static cycle_t mfgpt_read(struct clockso
>
> 	raw_spin_unlock_irqrestore(&mfgpt_lock, flags);
>
> -	return (cycle_t) (jifs * COMPARE) + count;
> +	return (u64) (jifs * COMPARE) + count;
> }
>
> static struct clocksource clocksource_mfgpt = {
> --- a/arch/mips/loongson64/loongson-3/hpet.c
> +++ b/arch/mips/loongson64/loongson-3/hpet.c
> @@ -248,9 +248,9 @@ void __init setup_hpet_timer(void)
> 	pr_info("hpet clock event device register\n");
> }
>
> -static cycle_t hpet_read_counter(struct clocksource *cs)
> +static u64 hpet_read_counter(struct clocksource *cs)
> {
> -	return (cycle_t)hpet_read(HPET_COUNTER);
> +	return (u64)hpet_read(HPET_COUNTER);
> }
>
> static void hpet_suspend(struct clocksource *cs)
> --- a/arch/mips/mti-malta/malta-time.c
> +++ b/arch/mips/mti-malta/malta-time.c
> @@ -75,7 +75,7 @@ static void __init estimate_frequencies(
> 	unsigned int count, start;
> 	unsigned char secs1, secs2, ctrl;
> 	int secs;
> -	cycle_t giccount = 0, gicstart = 0;
> +	u64 giccount = 0, gicstart = 0;
>
> #if defined(CONFIG_KVM_GUEST) && CONFIG_KVM_GUEST_TIMER_FREQ
> 	mips_hpt_frequency = CONFIG_KVM_GUEST_TIMER_FREQ * 1000000;
> --- a/arch/mips/netlogic/common/time.c
> +++ b/arch/mips/netlogic/common/time.c
> @@ -59,14 +59,14 @@ unsigned int get_c0_compare_int(void)
> 	return IRQ_TIMER;
> }
>
> -static cycle_t nlm_get_pic_timer(struct clocksource *cs)
> +static u64 nlm_get_pic_timer(struct clocksource *cs)
> {
> 	uint64_t picbase = nlm_get_node(0)->picbase;
>
> 	return ~nlm_pic_read_timer(picbase, PIC_CLOCK_TIMER);
> }
>
> -static cycle_t nlm_get_pic_timer32(struct clocksource *cs)
> +static u64 nlm_get_pic_timer32(struct clocksource *cs)
> {
> 	uint64_t picbase = nlm_get_node(0)->picbase;
>
> --- a/arch/mips/sgi-ip27/ip27-timer.c
> +++ b/arch/mips/sgi-ip27/ip27-timer.c
> @@ -140,7 +140,7 @@ static void __init hub_rt_clock_event_gl
> 	setup_irq(irq, &hub_rt_irqaction);
> }
>
> -static cycle_t hub_rt_read(struct clocksource *cs)
> +static u64 hub_rt_read(struct clocksource *cs)
> {
> 	return REMOTE_HUB_L(cputonasid(0), PI_RT_COUNT);
> }
> --- a/arch/mn10300/kernel/csrc-mn10300.c
> +++ b/arch/mn10300/kernel/csrc-mn10300.c
> @@ -13,7 +13,7 @@
> #include <asm/timex.h>
> #include "internal.h"
>
> -static cycle_t mn10300_read(struct clocksource *cs)
> +static u64 mn10300_read(struct clocksource *cs)
> {
> 	return read_timestamp_counter();
> }
> --- a/arch/nios2/kernel/time.c
> +++ b/arch/nios2/kernel/time.c
> @@ -81,7 +81,7 @@ static inline unsigned long read_timersn
> 	return count;
> }
>
> -static cycle_t nios2_timer_read(struct clocksource *cs)
> +static u64 nios2_timer_read(struct clocksource *cs)
> {
> 	struct nios2_clocksource *nios2_cs = to_nios2_clksource(cs);
> 	unsigned long flags;
> --- a/arch/openrisc/kernel/time.c
> +++ b/arch/openrisc/kernel/time.c
> @@ -117,9 +117,9 @@ static __init void openrisc_clockevent_i
>  * is 32 bits wide and runs at the CPU clock frequency.
>  */
>
> -static cycle_t openrisc_timer_read(struct clocksource *cs)
> +static u64 openrisc_timer_read(struct clocksource *cs)
> {
> -	return (cycle_t) mfspr(SPR_TTCR);
> +	return (u64) mfspr(SPR_TTCR);
> }
>
> static struct clocksource openrisc_timer = {
> --- a/arch/parisc/kernel/time.c
> +++ b/arch/parisc/kernel/time.c
> @@ -191,7 +191,7 @@ EXPORT_SYMBOL(profile_pc);
>
> /* clock source code */
>
> -static cycle_t notrace read_cr16(struct clocksource *cs)
> +static u64 notrace read_cr16(struct clocksource *cs)
> {
> 	return get_cycles();
> }
> --- a/arch/powerpc/kernel/time.c
> +++ b/arch/powerpc/kernel/time.c
> @@ -80,7 +80,7 @@
> #include <linux/clockchips.h>
> #include <linux/timekeeper_internal.h>
>
> -static cycle_t rtc_read(struct clocksource *);
> +static u64 rtc_read(struct clocksource *);
> static struct clocksource clocksource_rtc = {
> 	.name = "rtc",
> 	.rating = 400,
> @@ -89,7 +89,7 @@ static struct clocksource clocksource_rt
> 	.read = rtc_read,
> };
>
> -static cycle_t timebase_read(struct clocksource *);
> +static u64 timebase_read(struct clocksource *);
> static struct clocksource clocksource_timebase = {
> 	.name = "timebase",
> 	.rating = 400,
> @@ -802,18 +802,18 @@ void read_persistent_clock(struct timesp
> }
>
> /* clocksource code */
> -static cycle_t rtc_read(struct clocksource *cs)
> +static u64 rtc_read(struct clocksource *cs)
> {
> -	return (cycle_t)get_rtc();
> +	return (u64)get_rtc();
> }
>
> -static cycle_t timebase_read(struct clocksource *cs)
> +static u64 timebase_read(struct clocksource *cs)
> {
> -	return (cycle_t)get_tb();
> +	return (u64)get_tb();
> }
>
> void update_vsyscall_old(struct timespec *wall_time, struct timespec *wtm,
> -	struct clocksource *clock, u32 mult, cycle_t cycle_last)
> +	struct clocksource *clock, u32 mult, u64 cycle_last)
> {
> 	u64 new_tb_to_xs, new_stamp_xsec;
> 	u32 frac_sec;
> --- a/arch/s390/kernel/time.c
> +++ b/arch/s390/kernel/time.c
> @@ -213,7 +213,7 @@ void read_boot_clock64(struct timespec64
> 	tod_to_timeval(clock - TOD_UNIX_EPOCH, ts);
> }
>
> -static cycle_t read_tod_clock(struct clocksource *cs)
> +static u64 read_tod_clock(struct clocksource *cs)
> {
> 	return get_tod_clock();
> }
> --- a/arch/sparc/kernel/time_32.c
> +++ b/arch/sparc/kernel/time_32.c
> @@ -148,7 +148,7 @@ static unsigned int sbus_cycles_offset(v
> 	return offset;
> }
>
> -static cycle_t timer_cs_read(struct clocksource *cs)
> +static u64 timer_cs_read(struct clocksource *cs)
> {
> 	unsigned int seq, offset;
> 	u64 cycles;
> --- a/arch/sparc/kernel/time_64.c
> +++ b/arch/sparc/kernel/time_64.c
> @@ -770,7 +770,7 @@ void udelay(unsigned long usecs)
> }
> EXPORT_SYMBOL(udelay);
>
> -static cycle_t clocksource_tick_read(struct clocksource *cs)
> +static u64 clocksource_tick_read(struct clocksource *cs)
> {
> 	return tick_ops->get_tick();
> }
> --- a/arch/um/kernel/time.c
> +++ b/arch/um/kernel/time.c
> @@ -83,7 +83,7 @@ static irqreturn_t um_timer(int irq, voi
> 	return IRQ_HANDLED;
> }
>
> -static cycle_t timer_read(struct clocksource *cs)
> +static u64 timer_read(struct clocksource *cs)
> {
> 	return os_nsecs() / TIMER_MULTIPLIER;
> }
> --- a/arch/unicore32/kernel/time.c
> +++ b/arch/unicore32/kernel/time.c
> @@ -62,7 +62,7 @@ static struct clock_event_device ckevt_p
> 	.set_state_oneshot = puv3_osmr0_shutdown,
> };
>
> -static cycle_t puv3_read_oscr(struct clocksource *cs)
> +static u64 puv3_read_oscr(struct clocksource *cs)
> {
> 	return readl(OST_OSCR);
> }
> --- a/arch/x86/entry/vdso/vclock_gettime.c
> +++ b/arch/x86/entry/vdso/vclock_gettime.c
> @@ -92,10 +92,10 @@ static notrace const struct pvclock_vsys
> 	return (const struct pvclock_vsyscall_time_info *)&pvclock_page;
> }
>
> -static notrace cycle_t vread_pvclock(int *mode)
> +static notrace u64 vread_pvclock(int *mode)
> {
> 	const struct pvclock_vcpu_time_info *pvti = &get_pvti0()->pvti;
> -	cycle_t ret;
> +	u64 ret;
> 	u64 last;
> 	u32 version;
>
> @@ -142,9 +142,9 @@ static notrace cycle_t vread_pvclock(int
> }
> #endif
>
> -notrace static cycle_t vread_tsc(void)
> +notrace static u64 vread_tsc(void)
> {
> -	cycle_t ret = (cycle_t)rdtsc_ordered();
> +	u64 ret = (u64)rdtsc_ordered();
> 	u64 last = gtod->cycle_last;
>
> 	if (likely(ret >= last))
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -758,7 +758,7 @@ struct kvm_arch {
> 	spinlock_t pvclock_gtod_sync_lock;
> 	bool use_master_clock;
> 	u64 master_kernel_ns;
> -	cycle_t master_cycle_now;
> +	u64 master_cycle_now;
> 	struct delayed_work kvmclock_update_work;
> 	struct delayed_work kvmclock_sync_work;
>
> --- a/arch/x86/include/asm/pvclock.h
> +++ b/arch/x86/include/asm/pvclock.h
> @@ -14,7 +14,7 @@ static inline struct pvclock_vsyscall_ti
> #endif
>
> /* some helper functions for xen and kvm pv clock sources */
> -cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src);
> +u64 pvclock_clocksource_read(struct pvclock_vcpu_time_info *src);
> u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src);
> void pvclock_set_flags(u8 flags);
> unsigned long pvclock_tsc_khz(struct pvclock_vcpu_time_info *src);
> @@ -87,11 +87,11 @@ static inline u64 pvclock_scale_delta(u6
> }
>
> static __always_inline
> -cycle_t __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
> +u64 __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
> 			      u64 tsc)
> {
> 	u64 delta = tsc - src->tsc_timestamp;
> -	cycle_t offset = pvclock_scale_delta(delta, src->tsc_to_system_mul,
> +	u64 offset = pvclock_scale_delta(delta, src->tsc_to_system_mul,
> 					     src->tsc_shift);
> 	return src->system_time + offset;
> }
> --- a/arch/x86/include/asm/tsc.h
> +++ b/arch/x86/include/asm/tsc.h
> @@ -29,7 +29,7 @@ static inline cycles_t get_cycles(void)
> 	return rdtsc();
> }
>
> -extern struct system_counterval_t convert_art_to_tsc(cycle_t art);
> +extern struct system_counterval_t convert_art_to_tsc(u64 art);
>
> extern void tsc_init(void);
> extern void mark_tsc_unstable(char *reason);
> --- a/arch/x86/include/asm/vgtod.h
> +++ b/arch/x86/include/asm/vgtod.h
> @@ -17,8 +17,8 @@ struct vsyscall_gtod_data {
> 	unsigned seq;
>
> 	int vclock_mode;
> -	cycle_t cycle_last;
> -	cycle_t mask;
> +	u64 cycle_last;
> +	u64 mask;
> 	u32 mult;
> 	u32 shift;
>
> --- a/arch/x86/kernel/apb_timer.c
> +++ b/arch/x86/kernel/apb_timer.c
> @@ -247,7 +247,7 @@ void apbt_setup_secondary_clock(void) {}
> static int apbt_clocksource_register(void)
> {
> 	u64 start, now;
> -	cycle_t t1;
> +	u64 t1;
>
> 	/* Start the counter, use timer 2 as source, timer 0/1 for event */
> 	dw_apb_clocksource_start(clocksource_apbt);
> @@ -355,7 +355,7 @@ unsigned long apbt_quick_calibrate(void)
> {
> 	int i, scale;
> 	u64 old, new;
> -	cycle_t t1, t2;
> +	u64 t1, t2;
> 	unsigned long khz = 0;
> 	u32 loop, shift;
>
> --- a/arch/x86/kernel/cpu/mshyperv.c
> +++ b/arch/x86/kernel/cpu/mshyperv.c
> @@ -133,9 +133,9 @@ static uint32_t __init ms_hyperv_platfo
> 	return 0;
> }
>
> -static cycle_t read_hv_clock(struct clocksource *arg)
> +static u64 read_hv_clock(struct clocksource *arg)
> {
> -	cycle_t current_tick;
> +	u64 current_tick;
> 	/*
> 	 * Read the partition counter to get the current tick count. This count
> 	 * is set to 0 when the partition is created and is incremented in
> --- a/arch/x86/kernel/hpet.c
> +++ b/arch/x86/kernel/hpet.c
> @@ -791,7 +791,7 @@ static union hpet_lock hpet __cacheline_
> 	{ .lock = __ARCH_SPIN_LOCK_UNLOCKED, },
> };
>
> -static cycle_t read_hpet(struct clocksource *cs)
> +static u64 read_hpet(struct clocksource *cs)
> {
> 	unsigned long flags;
> 	union hpet_lock old, new;
> @@ -802,7 +802,7 @@ static cycle_t read_hpet(struct clocksou
> 	 * Read HPET directly if in NMI.
> 	 */
> 	if (in_nmi())
> -		return (cycle_t)hpet_readl(HPET_COUNTER);
> +		return (u64)hpet_readl(HPET_COUNTER);
>
> 	/*
> 	 * Read the current state of the lock and HPET value atomically.
> @@ -821,7 +821,7 @@ static cycle_t read_hpet(struct clocksou
> 		WRITE_ONCE(hpet.value, new.value);
> 		arch_spin_unlock(&hpet.lock);
> 		local_irq_restore(flags);
> -		return (cycle_t)new.value;
> +		return (u64)new.value;
> 	}
> 	local_irq_restore(flags);
>
> @@ -843,15 +843,15 @@ static cycle_t read_hpet(struct clocksou
> 		new.lockval = READ_ONCE(hpet.lockval);
> 	} while ((new.value == old.value) && arch_spin_is_locked(&new.lock));
>
> -	return (cycle_t)new.value;
> +	return (u64)new.value;
> }
> #else
> /*
>  * For UP or 32-bit.
>  */
> -static cycle_t read_hpet(struct clocksource *cs)
> +static u64 read_hpet(struct clocksource *cs)
> {
> -	return (cycle_t)hpet_readl(HPET_COUNTER);
> +	return (u64)hpet_readl(HPET_COUNTER);
> }
> #endif
>
> @@ -867,7 +867,7 @@ static struct clocksource clocksource_hp
> static int hpet_clocksource_register(void)
> {
> 	u64 start, now;
> -	cycle_t t1;
> +	u64 t1;
>
> 	/* Start the counter */
> 	hpet_restart_counter();
> --- a/arch/x86/kernel/kvmclock.c
> +++ b/arch/x86/kernel/kvmclock.c
> @@ -32,7 +32,7 @@
> static int kvmclock __ro_after_init = 1;
> static int msr_kvm_system_time = MSR_KVM_SYSTEM_TIME;
> static int msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK;
> -static cycle_t kvm_sched_clock_offset;
> +static u64 kvm_sched_clock_offset;
>
> static int parse_no_kvmclock(char *arg)
> {
> @@ -79,10 +79,10 @@ static int kvm_set_wallclock(const struc
> 	return -1;
> }
>
> -static cycle_t kvm_clock_read(void)
> +static u64 kvm_clock_read(void)
> {
> 	struct pvclock_vcpu_time_info *src;
> -	cycle_t ret;
> +	u64 ret;
> 	int cpu;
>
> 	preempt_disable_notrace();
> @@ -93,12 +93,12 @@ static cycle_t kvm_clock_read(void)
> 	return ret;
> }
>
> -static cycle_t kvm_clock_get_cycles(struct clocksource *cs)
> +static u64 kvm_clock_get_cycles(struct clocksource *cs)
> {
> 	return kvm_clock_read();
> }
>
> -static cycle_t kvm_sched_clock_read(void)
> +static u64 kvm_sched_clock_read(void)
> {
> 	return kvm_clock_read() - kvm_sched_clock_offset;
> }
> --- a/arch/x86/kernel/pvclock.c
> +++ b/arch/x86/kernel/pvclock.c
> @@ -71,10 +71,10 @@ u8 pvclock_read_flags(struct pvclock_vcp
> 	return flags & valid_flags;
> }
>
> -cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
> +u64 pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
> {
> 	unsigned version;
> -	cycle_t ret;
> +	u64 ret;
> 	u64 last;
> 	u8 flags;
>
> --- a/arch/x86/kernel/tsc.c
> +++ b/arch/x86/kernel/tsc.c
> @@ -1080,9 +1080,9 @@ static struct clocksource clocksource_ts
>  * checking the result of read_tsc() - cycle_last for being negative.
>  * That works because CLOCKSOURCE_MASK(64) does not mask out any bit.
>  */
> -static cycle_t read_tsc(struct clocksource *cs)
> +static u64 read_tsc(struct clocksource *cs)
> {
> -	return (cycle_t)rdtsc_ordered();
> +	return (u64)rdtsc_ordered();
> }
>
> /*
> @@ -1170,7 +1170,7 @@ int unsynchronized_tsc(void)
> /*
>  * Convert ART to TSC given numerator/denominator found in detect_art()
>  */
> -struct system_counterval_t convert_art_to_tsc(cycle_t art)
> +struct system_counterval_t convert_art_to_tsc(u64 art)
> {
> 	u64 tmp, res, rem;
>
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1128,8 +1128,8 @@ struct pvclock_gtod_data {
>
> 	struct { /* extract of a clocksource struct */
> 		int vclock_mode;
> -		cycle_t cycle_last;
> -		cycle_t mask;
> +		u64 cycle_last;
> +		u64 mask;
> 		u32 mult;
> 		u32 shift;
> 	} clock;
> @@ -1569,9 +1569,9 @@ static inline void adjust_tsc_offset_hos
>
> #ifdef CONFIG_X86_64
>
> -static cycle_t read_tsc(void)
> +static u64 read_tsc(void)
> {
> -	cycle_t ret = (cycle_t)rdtsc_ordered();
> +	u64 ret = (u64)rdtsc_ordered();
> 	u64 last = pvclock_gtod_data.clock.cycle_last;
>
> 	if (likely(ret >= last))
> @@ -1589,7 +1589,7 @@ static cycle_t read_tsc(void)
> 	return last;
> }
>
> -static inline u64 vgettsc(cycle_t *cycle_now)
> +static inline u64 vgettsc(u64 *cycle_now)
> {
> 	long v;
> 	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
> @@ -1600,7 +1600,7 @@ static inline u64 vgettsc(cycle_t *cycle
> 	return v * gtod->clock.mult;
> }
>
> -static int do_monotonic_boot(s64 *t, cycle_t *cycle_now)
> +static int do_monotonic_boot(s64 *t, u64 *cycle_now)
> {
> 	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
> 	unsigned long seq;
> @@ -1621,7 +1621,7 @@ static int do_monotonic_boot(s64 *t, cyc
> }
>
> /* returns true if host is using tsc clocksource */
> -static bool kvm_get_time_and_clockread(s64 *kernel_ns, cycle_t *cycle_now)
> +static bool kvm_get_time_and_clockread(s64 *kernel_ns, u64 *cycle_now)
> {
> 	/* checked again under seqlock below */
> 	if (pvclock_gtod_data.clock.vclock_mode != VCLOCK_TSC)
> --- a/arch/x86/lguest/boot.c
> +++ b/arch/x86/lguest/boot.c
> @@ -930,7 +930,7 @@ static unsigned long lguest_tsc_khz(void
>  * If we can't use the TSC, the kernel falls back to our lower-priority
>  * "lguest_clock", where we read the time value given to us by the Host.
>  */
> -static cycle_t lguest_clock_read(struct clocksource *cs)
> +static u64 lguest_clock_read(struct clocksource *cs)
> {
> 	unsigned long sec, nsec;
>
> --- a/arch/x86/platform/uv/uv_time.c
> +++ b/arch/x86/platform/uv/uv_time.c
> @@ -30,7 +30,7 @@
>
> #define RTC_NAME "sgi_rtc"
>
> -static cycle_t uv_read_rtc(struct clocksource *cs);
> +static u64 uv_read_rtc(struct clocksource *cs);
> static int uv_rtc_next_event(unsigned long, struct clock_event_device *);
> static int uv_rtc_shutdown(struct clock_event_device *evt);
>
> @@ -38,7 +38,7 @@ static struct clocksource clocksource_uv
> 	.name		= RTC_NAME,
> 	.rating		= 299,
> 	.read		= uv_read_rtc,
> -	.mask		= (cycle_t)UVH_RTC_REAL_TIME_CLOCK_MASK,
> +	.mask		= (u64)UVH_RTC_REAL_TIME_CLOCK_MASK,
> 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
> };
>
> @@ -296,7 +296,7 @@ static int uv_rtc_unset_timer(int cpu, i
>  * cachelines of it's own page. This allows faster simultaneous reads
>  * from a given socket.
>  */
> -static cycle_t uv_read_rtc(struct clocksource *cs)
> +static u64 uv_read_rtc(struct clocksource *cs)
> {
> 	unsigned long offset;
>
> @@ -305,7 +305,7 @@ static cycle_t uv_read_rtc(struct clocks
> 	else
> 		offset = (uv_blade_processor_id() * L1_CACHE_BYTES) % PAGE_SIZE;
>
> -	return (cycle_t)uv_read_local_mmr(UVH_RTC | offset);
> +	return (u64)uv_read_local_mmr(UVH_RTC | offset);
> }
>
> /*
> --- a/arch/x86/xen/time.c
> +++ b/arch/x86/xen/time.c
> @@ -39,10 +39,10 @@ static unsigned long xen_tsc_khz(void)
> 	return pvclock_tsc_khz(info);
> }
>
> -cycle_t xen_clocksource_read(void)
> +u64 xen_clocksource_read(void)
> {
> 	struct pvclock_vcpu_time_info *src;
> -	cycle_t ret;
> +	u64 ret;
>
> 	preempt_disable_notrace();
> 	src = &__this_cpu_read(xen_vcpu)->time;
> @@ -51,7 +51,7 @@ cycle_t xen_clocksource_read(void)
> 	return ret;
> }
>
> -static cycle_t xen_clocksource_get_cycles(struct clocksource *cs)
> +static u64 xen_clocksource_get_cycles(struct clocksource *cs)
> {
> 	return xen_clocksource_read();
> }
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -67,7 +67,7 @@ void xen_init_irq_ops(void);
> void xen_setup_timer(int cpu);
> void xen_setup_runstate_info(int cpu);
> void xen_teardown_timer(int cpu);
> -cycle_t xen_clocksource_read(void);
> +u64 xen_clocksource_read(void);
> void xen_setup_cpu_clockevents(void);
> void __init xen_init_time_ops(void);
> void __init xen_hvm_init_time_ops(void);
> --- a/arch/xtensa/kernel/time.c
> +++ b/arch/xtensa/kernel/time.c
> @@ -34,9 +34,9 @@
> unsigned long ccount_freq;		/* ccount Hz */
> EXPORT_SYMBOL(ccount_freq);
>
> -static cycle_t ccount_read(struct clocksource *cs)
> +static u64 ccount_read(struct clocksource *cs)
> {
> -	return (cycle_t)get_ccount();
> +	return (u64)get_ccount();
> }
>
> static u64 notrace ccount_sched_clock_read(void)
> --- a/drivers/char/hpet.c
> +++ b/drivers/char/hpet.c
> @@ -69,9 +69,9 @@ static u32 hpet_nhpet, hpet_max_freq = H
> #ifdef CONFIG_IA64
> static void __iomem *hpet_mctr;
>
> -static cycle_t read_hpet(struct clocksource *cs)
> +static u64 read_hpet(struct clocksource *cs)
> {
> -	return (cycle_t)read_counter((void __iomem *)hpet_mctr);
> +	return (u64)read_counter((void __iomem *)hpet_mctr);
> }
>
> static struct clocksource clocksource_hpet = {
> --- a/drivers/clocksource/acpi_pm.c
> +++ b/drivers/clocksource/acpi_pm.c
> @@ -58,16 +58,16 @@ u32 acpi_pm_read_verified(void)
> 	return v2;
> }
>
> -static cycle_t acpi_pm_read(struct clocksource *cs)
> +static u64 acpi_pm_read(struct clocksource *cs)
> {
> -	return (cycle_t)read_pmtmr();
> +	return (u64)read_pmtmr();
> }
>
> static struct clocksource clocksource_acpi_pm = {
> 	.name		= "acpi_pm",
> 	.rating		= 200,
> 	.read		= acpi_pm_read,
> -	.mask		= (cycle_t)ACPI_PM_MASK,
> +	.mask		= (u64)ACPI_PM_MASK,
> 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
> };
>
> @@ -81,9 +81,9 @@ static int __init acpi_pm_good_setup(cha
> }
> __setup("acpi_pm_good", acpi_pm_good_setup);
>
> -static cycle_t acpi_pm_read_slow(struct clocksource *cs)
> +static u64 acpi_pm_read_slow(struct clocksource *cs)
> {
> -	return (cycle_t)acpi_pm_read_verified();
> +	return (u64)acpi_pm_read_verified();
> }
>
> static inline void acpi_pm_need_workaround(void)
> @@ -145,7 +145,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SE
>  */
> static int verify_pmtmr_rate(void)
> {
> -	cycle_t value1, value2;
> +	u64 value1, value2;
> 	unsigned long count, delta;
>
> 	mach_prepare_counter();
> @@ -175,7 +175,7 @@ static int verify_pmtmr_rate(void)
>
> static int __init init_acpi_pm_clocksource(void)
> {
> -	cycle_t value1, value2;
> +	u64 value1, value2;
> 	unsigned int i, j = 0;
>
> 	if (!pmtmr_ioport)
> --- a/drivers/clocksource/arm_arch_timer.c
> +++ b/drivers/clocksource/arm_arch_timer.c
> @@ -561,12 +561,12 @@ static u64 arch_counter_get_cntvct_mem(v
>  */
> u64 (*arch_timer_read_counter)(void) = arch_counter_get_cntvct;
>
> -static cycle_t arch_counter_read(struct clocksource *cs)
> +static u64 arch_counter_read(struct clocksource *cs)
> {
> 	return arch_timer_read_counter();
> }
>
> -static cycle_t arch_counter_read_cc(const struct cyclecounter *cc)
> +static u64 arch_counter_read_cc(const struct cyclecounter *cc)
> {
> 	return arch_timer_read_counter();
> }
> --- a/drivers/clocksource/arm_global_timer.c
> +++ b/drivers/clocksource/arm_global_timer.c
> @@ -195,7 +195,7 @@ static int gt_dying_cpu(unsigned int cpu
> 	return 0;
> }
>
> -static cycle_t gt_clocksource_read(struct clocksource *cs)
> +static u64 gt_clocksource_read(struct clocksource *cs)
> {
> 	return gt_counter_read();
> }
> --- a/drivers/clocksource/cadence_ttc_timer.c
> +++ b/drivers/clocksource/cadence_ttc_timer.c
> @@ -158,11 +158,11 @@ static irqreturn_t ttc_clock_event_inter
>  *
>  * returns: Current timer counter register value
>  **/
> -static cycle_t __ttc_clocksource_read(struct clocksource *cs)
> +static u64 __ttc_clocksource_read(struct clocksource *cs)
> {
> 	struct ttc_timer *timer = &to_ttc_timer_clksrc(cs)->ttc;
>
> -	return (cycle_t)readl_relaxed(timer->base_addr +
> +	return (u64)readl_relaxed(timer->base_addr +
> 				TTC_COUNT_VAL_OFFSET);
> }
>
> --- a/drivers/clocksource/clksrc-dbx500-prcmu.c
> +++ b/drivers/clocksource/clksrc-dbx500-prcmu.c
> @@ -30,7 +30,7 @@
>
> static void __iomem *clksrc_dbx500_timer_base;
>
> -static cycle_t notrace clksrc_dbx500_prcmu_read(struct clocksource *cs)
> +static u64 notrace clksrc_dbx500_prcmu_read(struct clocksource *cs)
> {
> 	void __iomem *base = clksrc_dbx500_timer_base;
> 	u32 count, count2;
> --- a/drivers/clocksource/dw_apb_timer.c
> +++ b/drivers/clocksource/dw_apb_timer.c
> @@ -348,7 +348,7 @@ void dw_apb_clocksource_start(struct dw_
> 	dw_apb_clocksource_read(dw_cs);
> }
>
> -static cycle_t __apbt_read_clocksource(struct clocksource *cs)
> +static u64 __apbt_read_clocksource(struct clocksource *cs)
> {
> 	u32 current_count;
> 	struct dw_apb_clocksource *dw_cs =
> @@ -357,7 +357,7 @@ static cycle_t __apbt_read_clocksource(s
> 	current_count = apbt_readl_relaxed(&dw_cs->timer,
> 					APBTMR_N_CURRENT_VALUE);
>
> -	return (cycle_t)~current_count;
> +	return (u64)~current_count;
> }
>
> static void apbt_restart_clocksource(struct clocksource *cs)
> @@ -416,7 +416,7 @@ void dw_apb_clocksource_register(struct
>  *
>  * @dw_cs:	The clocksource to read.
>  */
> -cycle_t dw_apb_clocksource_read(struct dw_apb_clocksource *dw_cs)
> +u64 dw_apb_clocksource_read(struct dw_apb_clocksource *dw_cs)
> {
> -	return (cycle_t)~apbt_readl(&dw_cs->timer, APBTMR_N_CURRENT_VALUE);
> +	return (u64)~apbt_readl(&dw_cs->timer, APBTMR_N_CURRENT_VALUE);
> }
> --- a/drivers/clocksource/em_sti.c
> +++ b/drivers/clocksource/em_sti.c
> @@ -110,9 +110,9 @@ static void em_sti_disable(struct em_sti
> 	clk_disable_unprepare(p->clk);
> }
>
> -static cycle_t em_sti_count(struct em_sti_priv *p)
> +static u64 em_sti_count(struct em_sti_priv *p)
> {
> -	cycle_t ticks;
> +	u64 ticks;
> 	unsigned long flags;
>
> 	/* the STI hardware buffers the 48-bit count, but to
> @@ -121,14 +121,14 @@ static cycle_t em_sti_count(struct em_st
> 	 * Always read STI_COUNT_H before STI_COUNT_L.
> 	 */
> 	raw_spin_lock_irqsave(&p->lock, flags);
> -	ticks = (cycle_t)(em_sti_read(p, STI_COUNT_H) & 0xffff) << 32;
> +	ticks = (u64)(em_sti_read(p, STI_COUNT_H) & 0xffff) << 32;
> 	ticks |= em_sti_read(p, STI_COUNT_L);
> 	raw_spin_unlock_irqrestore(&p->lock, flags);
>
> 	return ticks;
> }
>
> -static cycle_t em_sti_set_next(struct em_sti_priv *p, cycle_t next)
> +static u64 em_sti_set_next(struct em_sti_priv *p, u64 next)
> {
> 	unsigned long flags;
>
> @@ -198,7 +198,7 @@ static struct em_sti_priv *cs_to_em_sti(
> 	return container_of(cs, struct em_sti_priv, cs);
> }
>
> -static cycle_t em_sti_clocksource_read(struct clocksource *cs)
> +static u64 em_sti_clocksource_read(struct clocksource *cs)
> {
> 	return em_sti_count(cs_to_em_sti(cs));
> }
> @@ -271,7 +271,7 @@ static int em_sti_clock_event_next(unsig
> 					   struct clock_event_device *ced)
> {
> 	struct em_sti_priv *p = ced_to_em_sti(ced);
> -	cycle_t next;
> +	u64 next;
> 	int safe;
>
> 	next = em_sti_set_next(p, em_sti_count(p) + delta);
> --- a/drivers/clocksource/exynos_mct.c
> +++ b/drivers/clocksource/exynos_mct.c
> @@ -183,7 +183,7 @@ static u64 exynos4_read_count_64(void)
> 		hi2 = readl_relaxed(reg_base + EXYNOS4_MCT_G_CNT_U);
> 	} while (hi != hi2);
>
> -	return ((cycle_t)hi << 32) | lo;
> +	return ((u64)hi << 32) | lo;
> }
>
> /**
> @@ -199,7 +199,7 @@ static u32 notrace exynos4_read_count_32
> 	return readl_relaxed(reg_base + EXYNOS4_MCT_G_CNT_L);
> }
>
> -static cycle_t exynos4_frc_read(struct clocksource *cs)
> +static u64 exynos4_frc_read(struct clocksource *cs)
> {
> 	return exynos4_read_count_32();
> }
> @@ -266,7 +266,7 @@ static void exynos4_mct_comp0_stop(void)
> static void exynos4_mct_comp0_start(bool periodic, unsigned long cycles)
> {
> 	unsigned int tcon;
> -	cycle_t comp_cycle;
> +	u64 comp_cycle;
>
> 	tcon = readl_relaxed(reg_base + EXYNOS4_MCT_G_TCON);
>
> --- a/drivers/clocksource/h8300_timer16.c
> +++ b/drivers/clocksource/h8300_timer16.c
> @@ -72,7 +72,7 @@ static inline struct timer16_priv *cs_to
> 	return container_of(cs, struct timer16_priv, cs);
> }
>
> -static cycle_t timer16_clocksource_read(struct clocksource *cs)
> +static u64 timer16_clocksource_read(struct clocksource *cs)
> {
> 	struct timer16_priv *p = cs_to_priv(cs);
> 	unsigned long raw, value;
> --- a/drivers/clocksource/h8300_tpu.c
> +++ b/drivers/clocksource/h8300_tpu.c
> @@ -64,7 +64,7 @@ static inline struct tpu_priv *cs_to_pri
> 	return container_of(cs, struct tpu_priv, cs);
> }
>
> -static cycle_t tpu_clocksource_read(struct clocksource *cs)
> +static u64 tpu_clocksource_read(struct clocksource *cs)
> {
> 	struct tpu_priv *p = cs_to_priv(cs);
> 	unsigned long flags;
> --- a/drivers/clocksource/i8253.c
> +++ b/drivers/clocksource/i8253.c
> @@ -25,7 +25,7 @@ EXPORT_SYMBOL(i8253_lock);
>  * to just read by itself. So use jiffies to emulate a free
>  * running counter:
>  */
> -static cycle_t i8253_read(struct clocksource *cs)
> +static u64 i8253_read(struct clocksource *cs)
> {
> 	static int old_count;
> 	static u32 old_jifs;
> @@ -83,7 +83,7 @@ static cycle_t i8253_read(struct clockso
>
> 	count = (PIT_LATCH - 1) - count;
>
> -	return (cycle_t)(jifs * PIT_LATCH) + count;
> +	return (u64)(jifs * PIT_LATCH) + count;
> }
>
> static struct clocksource i8253_cs = {
> --- a/drivers/clocksource/jcore-pit.c
> +++ b/drivers/clocksource/jcore-pit.c
> @@ -57,7 +57,7 @@ static notrace u64 jcore_sched_clock_rea
> 	return seclo * NSEC_PER_SEC + nsec;
> }
>
> -static cycle_t jcore_clocksource_read(struct clocksource *cs)
> +static u64 jcore_clocksource_read(struct clocksource *cs)
> {
> 	return jcore_sched_clock_read();
> }
> --- a/drivers/clocksource/metag_generic.c
> +++ b/drivers/clocksource/metag_generic.c
> @@ -56,7 +56,7 @@ static int metag_timer_set_next_event(un
> 	return 0;
> }
>
> -static cycle_t metag_clocksource_read(struct clocksource *cs)
> +static u64 metag_clocksource_read(struct clocksource *cs)
> {
> 	return __core_reg_get(TXTIMER);
> }
> --- a/drivers/clocksource/mips-gic-timer.c
> +++ b/drivers/clocksource/mips-gic-timer.c
> @@ -125,7 +125,7 @@ static int gic_clockevent_init(void)
> 	return 0;
> }
>
> -static cycle_t gic_hpt_read(struct clocksource *cs)
> +static u64 gic_hpt_read(struct clocksource *cs)
> {
> 	return gic_read_count();
> }
> --- a/drivers/clocksource/mmio.c
> +++ b/drivers/clocksource/mmio.c
> @@ -20,24 +20,24 @@ static inline struct clocksource_mmio *t
> 	return container_of(c, struct clocksource_mmio, clksrc);
> }
>
> -cycle_t clocksource_mmio_readl_up(struct clocksource *c)
> +u64 clocksource_mmio_readl_up(struct clocksource *c)
> {
> -	return (cycle_t)readl_relaxed(to_mmio_clksrc(c)->reg);
> +	return (u64)readl_relaxed(to_mmio_clksrc(c)->reg);
> }
>
> -cycle_t clocksource_mmio_readl_down(struct clocksource *c)
> +u64 clocksource_mmio_readl_down(struct clocksource *c)
> {
> -	return ~(cycle_t)readl_relaxed(to_mmio_clksrc(c)->reg) & c->mask;
> +	return ~(u64)readl_relaxed(to_mmio_clksrc(c)->reg) & c->mask;
> }
>
> -cycle_t clocksource_mmio_readw_up(struct clocksource *c)
> +u64 clocksource_mmio_readw_up(struct clocksource *c)
> {
> -	return (cycle_t)readw_relaxed(to_mmio_clksrc(c)->reg);
> +	return (u64)readw_relaxed(to_mmio_clksrc(c)->reg);
> }
>
> -cycle_t clocksource_mmio_readw_down(struct clocksource *c)
> +u64 clocksource_mmio_readw_down(struct clocksource *c)
> {
> -	return ~(cycle_t)readw_relaxed(to_mmio_clksrc(c)->reg) & c->mask;
> +	return ~(u64)readw_relaxed(to_mmio_clksrc(c)->reg) & c->mask;
> }
>
> /**
> @@ -51,7 +51,7 @@ cycle_t clocksource_mmio_readw_down(stru
>  */
> int __init clocksource_mmio_init(void __iomem *base, const char *name,
> 	unsigned long hz, int rating, unsigned bits,
> -	cycle_t (*read)(struct clocksource *))
> +	u64 (*read)(struct clocksource *))
> {
> 	struct clocksource_mmio *cs;
>
> --- a/drivers/clocksource/mxs_timer.c
> +++ b/drivers/clocksource/mxs_timer.c
> @@ -97,7 +97,7 @@ static void timrot_irq_acknowledge(void)
> 			HW_TIMROT_TIMCTRLn(0) + STMP_OFFSET_REG_CLR);
> }
>
> -static cycle_t timrotv1_get_cycles(struct clocksource *cs)
> +static u64 timrotv1_get_cycles(struct clocksource *cs)
> {
> 	return ~((__raw_readl(mxs_timrot_base + HW_TIMROT_TIMCOUNTn(1))
> 			& 0xffff0000) >> 16);
> --- a/drivers/clocksource/qcom-timer.c
> +++ b/drivers/clocksource/qcom-timer.c
> @@ -89,7 +89,7 @@ static struct clock_event_device __percp
>
> static void __iomem *source_base;
>
> -static notrace cycle_t msm_read_timer_count(struct clocksource *cs)
> +static notrace u64 msm_read_timer_count(struct clocksource *cs)
> {
> 	return readl_relaxed(source_base + TIMER_COUNT_VAL);
> }
> --- a/drivers/clocksource/samsung_pwm_timer.c
> +++ b/drivers/clocksource/samsung_pwm_timer.c
> @@ -307,7 +307,7 @@ static void samsung_clocksource_resume(s
> 	samsung_time_start(pwm.source_id, true);
> }
>
> -static cycle_t notrace samsung_clocksource_read(struct clocksource *c)
> +static u64 notrace samsung_clocksource_read(struct clocksource *c)
> {
> 	return ~readl_relaxed(pwm.source_reg);
> }
> --- a/drivers/clocksource/scx200_hrt.c
> +++ b/drivers/clocksource/scx200_hrt.c
> @@ -43,10 +43,10 @@ MODULE_PARM_DESC(ppm, "+-adjust to actua
> /* The base timer frequency, * 27 if selected */
> #define HRT_FREQ	1000000
>
> -static cycle_t read_hrt(struct clocksource *cs)
> +static u64 read_hrt(struct clocksource *cs)
> {
> 	/* Read the timer value */
> -	return (cycle_t) inl(scx200_cb_base + SCx200_TIMER_OFFSET);
> +	return (u64) inl(scx200_cb_base + SCx200_TIMER_OFFSET);
> }
>
> static struct clocksource cs_hrt = {
> --- a/drivers/clocksource/sh_cmt.c
> +++ b/drivers/clocksource/sh_cmt.c
> @@ -612,7 +612,7 @@ static struct sh_cmt_channel *cs_to_sh_c
> 	return container_of(cs, struct sh_cmt_channel, cs);
> }
>
> -static cycle_t sh_cmt_clocksource_read(struct clocksource *cs)
> +static u64 sh_cmt_clocksource_read(struct clocksource *cs)
> {
> 	struct sh_cmt_channel *ch = cs_to_sh_cmt(cs);
> 	unsigned long flags, raw;
> --- a/drivers/clocksource/sh_tmu.c
> +++ b/drivers/clocksource/sh_tmu.c
> @@ -255,7 +255,7 @@ static struct sh_tmu_channel *cs_to_sh_t
> 	return container_of(cs, struct sh_tmu_channel, cs);
> }
>
> -static cycle_t sh_tmu_clocksource_read(struct clocksource *cs)
> +static u64 sh_tmu_clocksource_read(struct clocksource *cs)
> {
> 	struct sh_tmu_channel *ch = cs_to_sh_tmu(cs);
>
> --- a/drivers/clocksource/tcb_clksrc.c
> +++ b/drivers/clocksource/tcb_clksrc.c
> @@ -41,7 +41,7 @@
>
> static void __iomem *tcaddr;
>
> -static cycle_t tc_get_cycles(struct clocksource *cs)
> +static u64 tc_get_cycles(struct clocksource *cs)
> {
> 	unsigned long flags;
> 	u32 lower, upper;
> @@ -56,7 +56,7 @@ static cycle_t tc_get_cycles(struct cloc
> 	return (upper << 16) | lower;
> }
>
> -static cycle_t tc_get_cycles32(struct clocksource *cs)
> +static u64 tc_get_cycles32(struct clocksource *cs)
> {
> 	return __raw_readl(tcaddr + ATMEL_TC_REG(0, CV));
> }
> --- a/drivers/clocksource/time-pistachio.c
> +++ b/drivers/clocksource/time-pistachio.c
> @@ -67,7 +67,7 @@ static inline void gpt_writel(void __iom
> 	writel(value, base + 0x20 * gpt_id + offset);
> }
>
> -static cycle_t notrace
> +static u64 notrace
> pistachio_clocksource_read_cycles(struct clocksource *cs)
> {
> 	struct pistachio_clocksource *pcs = to_pistachio_clocksource(cs);
> @@ -84,7 +84,7 @@ pistachio_clocksource_read_cycles(struct
> 	counter = gpt_readl(pcs->base, TIMER_CURRENT_VALUE, 0);
> 	raw_spin_unlock_irqrestore(&pcs->lock, flags);
>
> -	return (cycle_t)~counter;
> +	return (u64)~counter;
> }
>
> static u64 notrace pistachio_read_sched_clock(void)
> --- a/drivers/clocksource/timer-atlas7.c
> +++ b/drivers/clocksource/timer-atlas7.c
> @@ -85,7 +85,7 @@ static irqreturn_t sirfsoc_timer_interru
> }
>
> /* read 64-bit timer counter */
> -static cycle_t sirfsoc_timer_read(struct clocksource *cs)
> +static u64 sirfsoc_timer_read(struct clocksource *cs)
> {
> 	u64 cycles;
>
> --- a/drivers/clocksource/timer-atmel-pit.c
> +++ b/drivers/clocksource/timer-atmel-pit.c
> @@ -73,7 +73,7 @@ static inline void pit_write(void __iome
>  * Clocksource:  just a monotonic counter of MCK/16 cycles.
>  * We don't care whether or not PIT irqs are enabled.
>  */
> -static cycle_t read_pit_clk(struct clocksource *cs)
> +static u64 read_pit_clk(struct clocksource *cs)
> {
> 	struct pit_data *data = clksrc_to_pit_data(cs);
> 	unsigned long flags;
> --- a/drivers/clocksource/timer-atmel-st.c
> +++ b/drivers/clocksource/timer-atmel-st.c
> @@ -92,7 +92,7 @@ static irqreturn_t at91rm9200_timer_inte
> 	return IRQ_NONE;
> }
>
> -static cycle_t read_clk32k(struct clocksource *cs)
> +static u64 read_clk32k(struct clocksource *cs)
> {
> 	return read_CRTR();
> }
> --- a/drivers/clocksource/timer-nps.c
> +++ b/drivers/clocksource/timer-nps.c
> @@ -48,11 +48,11 @@ static void *nps_msu_reg_low_addr[NPS_CL
>
> static unsigned long nps_timer_rate;
>
> -static cycle_t nps_clksrc_read(struct clocksource *clksrc)
> +static u64 nps_clksrc_read(struct clocksource *clksrc)
> {
> 	int cluster = raw_smp_processor_id() >> NPS_CLUSTER_OFFSET;
>
> -	return (cycle_t)ioread32be(nps_msu_reg_low_addr[cluster]);
> +	return (u64)ioread32be(nps_msu_reg_low_addr[cluster]);
> }
>
> static int __init nps_setup_clocksource(struct device_node *node,
> --- a/drivers/clocksource/timer-prima2.c
> +++ b/drivers/clocksource/timer-prima2.c
> @@ -72,7 +72,7 @@ static irqreturn_t sirfsoc_timer_interru
> }
>
> /* read 64-bit timer counter */
> -static cycle_t notrace sirfsoc_timer_read(struct clocksource *cs)
> +static u64 notrace sirfsoc_timer_read(struct clocksource *cs)
> {
> 	u64 cycles;
>
> --- a/drivers/clocksource/timer-sun5i.c
> +++ b/drivers/clocksource/timer-sun5i.c
> @@ -152,7 +152,7 @@ static irqreturn_t sun5i_timer_interrupt
> 	return IRQ_HANDLED;
> }
>
> -static cycle_t sun5i_clksrc_read(struct clocksource *clksrc)
> +static u64 sun5i_clksrc_read(struct clocksource *clksrc)
> {
> 	struct sun5i_timer_clksrc *cs = to_sun5i_timer_clksrc(clksrc);
>
> --- a/drivers/clocksource/timer-ti-32k.c
> +++ b/drivers/clocksource/timer-ti-32k.c
> @@ -65,11
+65,11 @@ static inline struct ti_32k *to_ti_32k(s > return container_of(cs, struct ti_32k, cs); > } > > -static cycle_t notrace ti_32k_read_cycles(struct clocksource *cs) > +static u64 notrace ti_32k_read_cycles(struct clocksource *cs) > { > struct ti_32k *ti = to_ti_32k(cs); > > - return (cycle_t)readl_relaxed(ti->counter); > + return (u64)readl_relaxed(ti->counter); > } > > static struct ti_32k ti_32k_timer = { > --- a/drivers/clocksource/vt8500_timer.c > +++ b/drivers/clocksource/vt8500_timer.c > @@ -53,7 +53,7 @@ > > static void __iomem *regbase; > > -static cycle_t vt8500_timer_read(struct clocksource *cs) > +static u64 vt8500_timer_read(struct clocksource *cs) > { > int loops = msecs_to_loops(10); > writel(3, regbase + TIMER_CTRL_VAL); > @@ -75,7 +75,7 @@ static int vt8500_timer_set_next_event(u > struct clock_event_device *evt) > { > int loops = msecs_to_loops(10); > - cycle_t alarm = clocksource.read(&clocksource) + cycles; > + u64 alarm = clocksource.read(&clocksource) + cycles; > while ((readl(regbase + TIMER_AS_VAL) & TIMER_MATCH_W_ACTIVE) > && --loops) > cpu_relax(); > --- a/drivers/hv/hv.c > +++ b/drivers/hv/hv.c > @@ -135,9 +135,9 @@ u64 hv_do_hypercall(u64 control, void *i > EXPORT_SYMBOL_GPL(hv_do_hypercall); > > #ifdef CONFIG_X86_64 > -static cycle_t read_hv_clock_tsc(struct clocksource *arg) > +static u64 read_hv_clock_tsc(struct clocksource *arg) > { > - cycle_t current_tick; > + u64 current_tick; > struct ms_hyperv_tsc_page *tsc_pg = hv_context.tsc_page; > > if (tsc_pg->tsc_sequence != 0) { > @@ -146,7 +146,7 @@ static cycle_t read_hv_clock_tsc(struct > */ > > while (1) { > - cycle_t tmp; > + u64 tmp; > u32 sequence = tsc_pg->tsc_sequence; > u64 cur_tsc; > u64 scale = tsc_pg->tsc_scale; > @@ -350,7 +350,7 @@ int hv_post_message(union hv_connection_ > static int hv_ce_set_next_event(unsigned long delta, > struct clock_event_device *evt) > { > - cycle_t current_tick; > + u64 current_tick; > > WARN_ON(!clockevent_state_oneshot(evt)); > > --- 
a/drivers/irqchip/irq-mips-gic.c > +++ b/drivers/irqchip/irq-mips-gic.c > @@ -152,12 +152,12 @@ static inline void gic_map_to_vpe(unsign > } > > #ifdef CONFIG_CLKSRC_MIPS_GIC > -cycle_t gic_read_count(void) > +u64 gic_read_count(void) > { > unsigned int hi, hi2, lo; > > if (mips_cm_is64) > - return (cycle_t)gic_read(GIC_REG(SHARED, GIC_SH_COUNTER)); > + return (u64)gic_read(GIC_REG(SHARED, GIC_SH_COUNTER)); > > do { > hi = gic_read32(GIC_REG(SHARED, GIC_SH_COUNTER_63_32)); > @@ -165,7 +165,7 @@ cycle_t gic_read_count(void) > hi2 = gic_read32(GIC_REG(SHARED, GIC_SH_COUNTER_63_32)); > } while (hi2 != hi); > > - return (((cycle_t) hi) << 32) + lo; > + return (((u64) hi) << 32) + lo; > } > > unsigned int gic_get_count_width(void) > @@ -179,7 +179,7 @@ unsigned int gic_get_count_width(void) > return bits; > } > > -void gic_write_compare(cycle_t cnt) > +void gic_write_compare(u64 cnt) > { > if (mips_cm_is64) { > gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE), cnt); > @@ -191,7 +191,7 @@ void gic_write_compare(cycle_t cnt) > } > } > > -void gic_write_cpu_compare(cycle_t cnt, int cpu) > +void gic_write_cpu_compare(u64 cnt, int cpu) > { > unsigned long flags; > > @@ -211,17 +211,17 @@ void gic_write_cpu_compare(cycle_t cnt, > local_irq_restore(flags); > } > > -cycle_t gic_read_compare(void) > +u64 gic_read_compare(void) > { > unsigned int hi, lo; > > if (mips_cm_is64) > - return (cycle_t)gic_read(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE)); > + return (u64)gic_read(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE)); > > hi = gic_read32(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE_HI)); > lo = gic_read32(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE_LO)); > > - return (((cycle_t) hi) << 32) + lo; > + return (((u64) hi) << 32) + lo; > } > > void gic_start_count(void) > --- a/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c > +++ b/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c > @@ -122,7 +122,7 @@ > #include "xgbe.h" > #include "xgbe-common.h" > > -static cycle_t xgbe_cc_read(const struct cyclecounter *cc) > +static u64 
xgbe_cc_read(const struct cyclecounter *cc) > { > struct xgbe_prv_data *pdata = container_of(cc, > struct xgbe_prv_data, > --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c > +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c > @@ -15219,7 +15219,7 @@ void bnx2x_set_rx_ts(struct bnx2x *bp, s > } > > /* Read the PHC */ > -static cycle_t bnx2x_cyclecounter_read(const struct cyclecounter *cc) > +static u64 bnx2x_cyclecounter_read(const struct cyclecounter *cc) > { > struct bnx2x *bp = container_of(cc, struct bnx2x, cyclecounter); > int port = BP_PORT(bp); > --- a/drivers/net/ethernet/freescale/fec_ptp.c > +++ b/drivers/net/ethernet/freescale/fec_ptp.c > @@ -230,7 +230,7 @@ static int fec_ptp_enable_pps(struct fec > * cyclecounter structure used to construct a ns counter from the > * arbitrary fixed point registers > */ > -static cycle_t fec_ptp_read(const struct cyclecounter *cc) > +static u64 fec_ptp_read(const struct cyclecounter *cc) > { > struct fec_enet_private *fep = > container_of(cc, struct fec_enet_private, cc); > --- a/drivers/net/ethernet/intel/e1000e/netdev.c > +++ b/drivers/net/ethernet/intel/e1000e/netdev.c > @@ -4305,24 +4305,24 @@ void e1000e_reinit_locked(struct e1000_a > /** > * e1000e_sanitize_systim - sanitize raw cycle counter reads > * @hw: pointer to the HW structure > - * @systim: cycle_t value read, sanitized and returned > + * @systim: u64 timestamp value read, sanitized and returned > * > * Errata for 82574/82583 possible bad bits read from SYSTIMH/L: > * check to see that the time is incrementing at a reasonable > * rate and is a multiple of incvalue. 
> **/
> -static cycle_t e1000e_sanitize_systim(struct e1000_hw *hw, cycle_t systim)
> +static u64 e1000e_sanitize_systim(struct e1000_hw *hw, u64 systim)
> {
> u64 time_delta, rem, temp;
> - cycle_t systim_next;
> + u64 systim_next;
> u32 incvalue;
> int i;
>
> incvalue = er32(TIMINCA) & E1000_TIMINCA_INCVALUE_MASK;
> for (i = 0; i < E1000_MAX_82574_SYSTIM_REREADS; i++) {
> /* latch SYSTIMH on read of SYSTIML */
> - systim_next = (cycle_t)er32(SYSTIML);
> - systim_next |= (cycle_t)er32(SYSTIMH) << 32;
> + systim_next = (u64)er32(SYSTIML);
> + systim_next |= (u64)er32(SYSTIMH) << 32;
>
> time_delta = systim_next - systim;
> temp = time_delta;
> @@ -4342,13 +4342,13 @@ static cycle_t e1000e_sanitize_systim(st
> * e1000e_cyclecounter_read - read raw cycle counter (used by time counter)
> * @cc: cyclecounter structure
> **/
> -static cycle_t e1000e_cyclecounter_read(const struct cyclecounter *cc)
> +static u64 e1000e_cyclecounter_read(const struct cyclecounter *cc)
> {
> struct e1000_adapter *adapter = container_of(cc, struct e1000_adapter,
> cc);
> struct e1000_hw *hw = &adapter->hw;
> u32 systimel, systimeh;
> - cycle_t systim;
> + u64 systim;
> /* SYSTIMH latching upon SYSTIML read does not work well.
> * This means that if SYSTIML overflows after we read it but before
> * we read SYSTIMH, the value of SYSTIMH has been incremented and we
> @@ -4368,8 +4368,8 @@ static cycle_t e1000e_cyclecounter_read(
> systimel = systimel_2;
> }
> }
> - systim = (cycle_t)systimel;
> - systim |= (cycle_t)systimeh << 32;
> + systim = (u64)systimel;
> + systim |= (u64)systimeh << 32;
>
> if (adapter->flags2 & FLAG2_CHECK_SYSTIM_OVERFLOW)
> systim = e1000e_sanitize_systim(hw, systim);
> --- a/drivers/net/ethernet/intel/e1000e/ptp.c
> +++ b/drivers/net/ethernet/intel/e1000e/ptp.c
> @@ -127,8 +127,8 @@ static int e1000e_phc_get_syncdevicetime
> unsigned long flags;
> int i;
> u32 tsync_ctrl;
> - cycle_t dev_cycles;
> - cycle_t sys_cycles;
> + u64 dev_cycles;
> + u64 sys_cycles;
>
> tsync_ctrl = er32(TSYNCTXCTL);
> tsync_ctrl |= E1000_TSYNCTXCTL_START_SYNC |
> --- a/drivers/net/ethernet/intel/igb/igb_ptp.c
> +++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
> @@ -77,7 +77,7 @@
> static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter);
>
> /* SYSTIM read access for the 82576 */
> -static cycle_t igb_ptp_read_82576(const struct cyclecounter *cc)
> +static u64 igb_ptp_read_82576(const struct cyclecounter *cc)
> {
> struct igb_adapter *igb = container_of(cc, struct igb_adapter, cc);
> struct e1000_hw *hw = &igb->hw;
> @@ -94,7 +94,7 @@ static cycle_t igb_ptp_read_82576(const
> }
>
> /* SYSTIM read access for the 82580 */
> -static cycle_t igb_ptp_read_82580(const struct cyclecounter *cc)
> +static u64 igb_ptp_read_82580(const struct cyclecounter *cc)
> {
> struct igb_adapter *igb = container_of(cc, struct igb_adapter, cc);
> struct e1000_hw *hw = &igb->hw;
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
> @@ -245,7 +245,7 @@ static void ixgbe_ptp_setup_sdp_x540(str
> * result of SYSTIME is 32bits of "billions of cycles" and 32 bits of
> * "cycles", rather than seconds and nanoseconds.
> */
> -static cycle_t ixgbe_ptp_read_X550(const struct cyclecounter *hw_cc)
> +static u64 ixgbe_ptp_read_X550(const struct cyclecounter *hw_cc)
> {
> struct ixgbe_adapter *adapter =
> container_of(hw_cc, struct ixgbe_adapter, hw_cc);
> @@ -282,7 +282,7 @@ static cycle_t ixgbe_ptp_read_X550(const
> * cyclecounter structure used to construct a ns counter from the
> * arbitrary fixed point registers
> */
> -static cycle_t ixgbe_ptp_read_82599(const struct cyclecounter *cc)
> +static u64 ixgbe_ptp_read_82599(const struct cyclecounter *cc)
> {
> struct ixgbe_adapter *adapter =
> container_of(cc, struct ixgbe_adapter, hw_cc);
> --- a/drivers/net/ethernet/mellanox/mlx4/en_clock.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
> @@ -38,7 +38,7 @@
>
> /* mlx4_en_read_clock - read raw cycle counter (to be used by time counter)
> */
> -static cycle_t mlx4_en_read_clock(const struct cyclecounter *tc)
> +static u64 mlx4_en_read_clock(const struct cyclecounter *tc)
> {
> struct mlx4_en_dev *mdev =
> container_of(tc, struct mlx4_en_dev, cycles);
> --- a/drivers/net/ethernet/mellanox/mlx4/main.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/main.c
> @@ -1823,10 +1823,10 @@ static void unmap_bf_area(struct mlx4_de
> io_mapping_free(mlx4_priv(dev)->bf_mapping);
> }
>
> -cycle_t mlx4_read_clock(struct mlx4_dev *dev)
> +u64 mlx4_read_clock(struct mlx4_dev *dev)
> {
> u32 clockhi, clocklo, clockhi1;
> - cycle_t cycles;
> + u64 cycles;
> int i;
> struct mlx4_priv *priv = mlx4_priv(dev);
>
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_clock.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_clock.c
> @@ -49,7 +49,7 @@ void mlx5e_fill_hwstamp(struct mlx5e_tst
> hwts->hwtstamp = ns_to_ktime(nsec);
> }
>
> -static cycle_t mlx5e_read_internal_timer(const struct cyclecounter *cc)
> +static u64 mlx5e_read_internal_timer(const struct cyclecounter *cc)
> {
> struct mlx5e_tstamp *tstamp = container_of(cc, struct mlx5e_tstamp,
> cycles);
> --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
> @@ -522,7 +522,7 @@ int mlx5_core_disable_hca(struct mlx5_co
> return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
> }
>
> -cycle_t mlx5_read_internal_timer(struct mlx5_core_dev *dev)
> +u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev)
> {
> u32 timer_h, timer_h1, timer_l;
>
> @@ -532,7 +532,7 @@ cycle_t mlx5_read_internal_timer(struct
> if (timer_h != timer_h1) /* wrap around */
> timer_l = ioread32be(&dev->iseg->internal_timer_l);
>
> - return (cycle_t)timer_l | (cycle_t)timer_h1 << 32;
> + return (u64)timer_l | (u64)timer_h1 << 32;
> }
>
> static int mlx5_irq_set_affinity_hint(struct mlx5_core_dev *mdev, int i)
> --- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
> @@ -93,7 +93,7 @@ bool mlx5_sriov_is_enabled(struct mlx5_c
> int mlx5_core_enable_hca(struct mlx5_core_dev *dev, u16 func_id);
> int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id);
> int mlx5_wait_for_vf_pages(struct mlx5_core_dev *dev);
> -cycle_t mlx5_read_internal_timer(struct mlx5_core_dev *dev);
> +u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev);
> u32 mlx5_get_msix_vec(struct mlx5_core_dev *dev, int vecidx);
> struct mlx5_eq *mlx5_eqn2eq(struct mlx5_core_dev *dev, int eqn);
> void mlx5_cq_tasklet_cb(unsigned long data);
> --- a/drivers/net/ethernet/ti/cpts.c
> +++ b/drivers/net/ethernet/ti/cpts.c
> @@ -101,7 +101,7 @@ static int cpts_fifo_read(struct cpts *c
> return type == match ? 0 : -1;
> }
>
> -static cycle_t cpts_systim_read(const struct cyclecounter *cc)
> +static u64 cpts_systim_read(const struct cyclecounter *cc)
> {
> u64 val = 0;
> struct cpts_event *event;
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -25,13 +25,13 @@
>
> struct arch_timer_kvm {
> /* Virtual offset */
> - cycle_t cntvoff;
> + u64 cntvoff;
> };
>
> struct arch_timer_cpu {
> /* Registers: control register, timer value */
> u32 cntv_ctl; /* Saved/restored */
> - cycle_t cntv_cval; /* Saved/restored */
> + u64 cntv_cval; /* Saved/restored */
>
> /*
> * Anything that is not used directly from assembly code goes
> --- a/include/linux/clocksource.h
> +++ b/include/linux/clocksource.h
> @@ -75,8 +75,8 @@ struct module;
> * structure.
> */
> struct clocksource {
> - cycle_t (*read)(struct clocksource *cs);
> - cycle_t mask;
> + u64 (*read)(struct clocksource *cs);
> + u64 mask;
> u32 mult;
> u32 shift;
> u64 max_idle_ns;
> @@ -98,8 +98,8 @@ struct clocksource {
> #ifdef CONFIG_CLOCKSOURCE_WATCHDOG
> /* Watchdog related data, used by the framework */
> struct list_head wd_list;
> - cycle_t cs_last;
> - cycle_t wd_last;
> + u64 cs_last;
> + u64 wd_last;
> #endif
> struct module *owner;
> };
> @@ -117,7 +117,7 @@ struct clocksource {
> #define CLOCK_SOURCE_RESELECT 0x100
>
> /* simplify initialization of mask field */
> -#define CLOCKSOURCE_MASK(bits) (cycle_t)((bits) < 64 ? ((1ULL<<(bits))-1) : -1)
> +#define CLOCKSOURCE_MASK(bits) (u64)((bits) < 64 ? ((1ULL<<(bits))-1) : -1)
>
> static inline u32 clocksource_freq2mult(u32 freq, u32 shift_constant, u64 from)
> {
> @@ -173,7 +173,7 @@ static inline u32 clocksource_hz2mult(u3
> *
> * XXX - This could use some mult_lxl_ll() asm optimization
> */
> -static inline s64 clocksource_cyc2ns(cycle_t cycles, u32 mult, u32 shift)
> +static inline s64 clocksource_cyc2ns(u64 cycles, u32 mult, u32 shift)
> {
> return ((u64) cycles * mult) >> shift;
> }
> @@ -233,13 +233,13 @@ static inline void __clocksource_update_
>
> extern int timekeeping_notify(struct clocksource *clock);
>
> -extern cycle_t clocksource_mmio_readl_up(struct clocksource *);
> -extern cycle_t clocksource_mmio_readl_down(struct clocksource *);
> -extern cycle_t clocksource_mmio_readw_up(struct clocksource *);
> -extern cycle_t clocksource_mmio_readw_down(struct clocksource *);
> +extern u64 clocksource_mmio_readl_up(struct clocksource *);
> +extern u64 clocksource_mmio_readl_down(struct clocksource *);
> +extern u64 clocksource_mmio_readw_up(struct clocksource *);
> +extern u64 clocksource_mmio_readw_down(struct clocksource *);
>
> extern int clocksource_mmio_init(void __iomem *, const char *,
> - unsigned long, int, unsigned, cycle_t (*)(struct clocksource *));
> + unsigned long, int, unsigned, u64 (*)(struct clocksource *));
>
> extern int clocksource_i8253_init(void);
>
> --- a/include/linux/dw_apb_timer.h
> +++ b/include/linux/dw_apb_timer.h
> @@ -50,6 +50,6 @@ dw_apb_clocksource_init(unsigned rating,
> unsigned long freq);
> void dw_apb_clocksource_register(struct dw_apb_clocksource *dw_cs);
> void dw_apb_clocksource_start(struct dw_apb_clocksource *dw_cs);
> -cycle_t dw_apb_clocksource_read(struct dw_apb_clocksource *dw_cs);
> +u64 dw_apb_clocksource_read(struct dw_apb_clocksource *dw_cs);
>
> #endif /* __DW_APB_TIMER_H__ */
> --- a/include/linux/irqchip/mips-gic.h
> +++ b/include/linux/irqchip/mips-gic.h
> @@ -259,11 +259,11 @@ extern void gic_init(unsigned long gic_b
> unsigned long gic_addrspace_size, unsigned int cpu_vec,
> unsigned int irqbase);
> extern void gic_clocksource_init(unsigned int);
> -extern cycle_t gic_read_count(void);
> +extern u64 gic_read_count(void);
> extern unsigned int gic_get_count_width(void);
> -extern cycle_t gic_read_compare(void);
> -extern void gic_write_compare(cycle_t cnt);
> -extern void gic_write_cpu_compare(cycle_t cnt, int cpu);
> +extern u64 gic_read_compare(void);
> +extern void gic_write_compare(u64 cnt);
> +extern void gic_write_cpu_compare(u64 cnt, int cpu);
> extern void gic_start_count(void);
> extern void gic_stop_count(void);
> extern int gic_get_c0_compare_int(void);
> --- a/include/linux/mlx4/device.h
> +++ b/include/linux/mlx4/device.h
> @@ -1461,7 +1461,7 @@ int mlx4_get_roce_gid_from_slave(struct
> int mlx4_FLOW_STEERING_IB_UC_QP_RANGE(struct mlx4_dev *dev, u32 min_range_qpn,
> u32 max_range_qpn);
>
> -cycle_t mlx4_read_clock(struct mlx4_dev *dev);
> +u64 mlx4_read_clock(struct mlx4_dev *dev);
>
> struct mlx4_active_ports {
> DECLARE_BITMAP(ports, MLX4_MAX_PORTS);
> --- a/include/linux/timecounter.h
> +++ b/include/linux/timecounter.h
> @@ -20,7 +20,7 @@
> #include <linux/types.h>
>
> /* simplify initialization of mask field */
> -#define CYCLECOUNTER_MASK(bits) (cycle_t)((bits) < 64 ? ((1ULL<<(bits))-1) : -1)
> +#define CYCLECOUNTER_MASK(bits) (u64)((bits) < 64 ? ((1ULL<<(bits))-1) : -1)
>
> /**
> * struct cyclecounter - hardware abstraction for a free running counter
> @@ -37,8 +37,8 @@
> * @shift: cycle to nanosecond divisor (power of two)
> */
> struct cyclecounter {
> - cycle_t (*read)(const struct cyclecounter *cc);
> - cycle_t mask;
> + u64 (*read)(const struct cyclecounter *cc);
> + u64 mask;
> u32 mult;
> u32 shift;
> };
> @@ -63,7 +63,7 @@ struct cyclecounter {
> */
> struct timecounter {
> const struct cyclecounter *cc;
> - cycle_t cycle_last;
> + u64 cycle_last;
> u64 nsec;
> u64 mask;
> u64 frac;
> @@ -77,7 +77,7 @@ struct timecounter {
> * @frac: pointer to storage for the fractional nanoseconds.
> */
> static inline u64 cyclecounter_cyc2ns(const struct cyclecounter *cc,
> - cycle_t cycles, u64 mask, u64 *frac)
> + u64 cycles, u64 mask, u64 *frac)
> {
> u64 ns = (u64) cycles;
>
> @@ -134,6 +134,6 @@ extern u64 timecounter_read(struct timec
> * in the past.
> */
> extern u64 timecounter_cyc2time(struct timecounter *tc,
> - cycle_t cycle_tstamp);
> + u64 cycle_tstamp);
>
> #endif
> --- a/include/linux/timekeeper_internal.h
> +++ b/include/linux/timekeeper_internal.h
> @@ -29,9 +29,9 @@
> */
> struct tk_read_base {
> struct clocksource *clock;
> - cycle_t (*read)(struct clocksource *cs);
> - cycle_t mask;
> - cycle_t cycle_last;
> + u64 (*read)(struct clocksource *cs);
> + u64 mask;
> + u64 cycle_last;
> u32 mult;
> u32 shift;
> u64 xtime_nsec;
> @@ -97,7 +97,7 @@ struct timekeeper {
> struct timespec64 raw_time;
>
> /* The following members are for timekeeping internal use */
> - cycle_t cycle_interval;
> + u64 cycle_interval;
> u64 xtime_interval;
> s64 xtime_remainder;
> u32 raw_interval;
> @@ -136,7 +136,7 @@ extern void update_vsyscall_tz(void);
>
> extern void update_vsyscall_old(struct timespec *ts, struct timespec *wtm,
> struct clocksource *c, u32 mult,
> - cycle_t cycle_last);
> + u64 cycle_last);
> extern void update_vsyscall_tz(void);
>
> #else
> --- a/include/linux/timekeeping.h
> +++ b/include/linux/timekeeping.h
> @@ -292,7 +292,7 @@ extern void ktime_get_raw_and_real_ts64(
> * @cs_was_changed_seq: The sequence number of clocksource change events
> */
> struct system_time_snapshot {
> - cycle_t cycles;
> + u64 cycles;
> ktime_t real;
> ktime_t raw;
> unsigned int clock_was_set_seq;
> @@ -320,7 +320,7 @@ struct system_device_crosststamp {
> * timekeeping code to verify comparibility of two cycle values
> */
> struct system_counterval_t {
> - cycle_t cycles;
> + u64 cycles;
> struct clocksource *cs;
> };
>
> --- a/include/linux/types.h
> +++ b/include/linux/types.h
> @@ -228,8 +228,5 @@ struct callback_head {
> typedef void (*rcu_callback_t)(struct rcu_head *head);
> typedef void (*call_rcu_func_t)(struct rcu_head *head, rcu_callback_t func);
>
> -/* clocksource cycle base type */
> -typedef u64 cycle_t;
> -
> #endif /*  __ASSEMBLY__ */
> #endif /* _LINUX_TYPES_H */
> --- a/kernel/time/clocksource.c
> +++ b/kernel/time/clocksource.c
> @@ -169,7 +169,7 @@ void clocksource_mark_unstable(struct cl
> static void clocksource_watchdog(unsigned long data)
> {
> struct clocksource *cs;
> - cycle_t csnow, wdnow, cslast, wdlast, delta;
> + u64 csnow, wdnow, cslast, wdlast, delta;
> int64_t wd_nsec, cs_nsec;
> int next_cpu, reset_pending;
>
> --- a/kernel/time/jiffies.c
> +++ b/kernel/time/jiffies.c
> @@ -59,9 +59,9 @@
> #define JIFFIES_SHIFT 8
> #endif
>
> -static cycle_t jiffies_read(struct clocksource *cs)
> +static u64 jiffies_read(struct clocksource *cs)
> {
> - return (cycle_t) jiffies;
> + return (u64) jiffies;
> }
>
> static struct clocksource clocksource_jiffies = {
> --- a/kernel/time/timecounter.c
> +++ b/kernel/time/timecounter.c
> @@ -43,7 +43,7 @@ EXPORT_SYMBOL_GPL(timecounter_init);
> */
> static u64 timecounter_read_delta(struct timecounter *tc)
> {
> - cycle_t cycle_now, cycle_delta;
> + u64 cycle_now, cycle_delta;
> u64 ns_offset;
>
> /* read cycle counter: */
> @@ -80,7 +80,7 @@ EXPORT_SYMBOL_GPL(timecounter_read);
> * time previous to the time stored in the cycle counter.
> */
> static u64 cc_cyc2ns_backwards(const struct cyclecounter *cc,
> - cycle_t cycles, u64 mask, u64 frac)
> + u64 cycles, u64 mask, u64 frac)
> {
> u64 ns = (u64) cycles;
>
> @@ -90,7 +90,7 @@ static u64 cc_cyc2ns_backwards(const str
> }
>
> u64 timecounter_cyc2time(struct timecounter *tc,
> - cycle_t cycle_tstamp)
> + u64 cycle_tstamp)
> {
> u64 delta = (cycle_tstamp - tc->cycle_last) & tc->cc->mask;
> u64 nsec = tc->nsec, frac = tc->frac;
> --- a/kernel/time/timekeeping.c
> +++ b/kernel/time/timekeeping.c
> @@ -119,10 +119,10 @@ static inline void tk_update_sleep_time(
> #ifdef CONFIG_DEBUG_TIMEKEEPING
> #define WARNING_FREQ (HZ*300) /* 5 minute rate-limiting */
>
> -static void timekeeping_check_update(struct timekeeper *tk, cycle_t offset)
> +static void timekeeping_check_update(struct timekeeper *tk, u64 offset)
> {
>
> - cycle_t max_cycles = tk->tkr_mono.clock->max_cycles;
> + u64 max_cycles = tk->tkr_mono.clock->max_cycles;
> const char *name = tk->tkr_mono.clock->name;
>
> if (offset > max_cycles) {
> @@ -158,10 +158,10 @@ static void timekeeping_check_update(str
> }
> }
>
> -static inline cycle_t timekeeping_get_delta(struct tk_read_base *tkr)
> +static inline u64 timekeeping_get_delta(struct tk_read_base *tkr)
> {
> struct timekeeper *tk = &tk_core.timekeeper;
> - cycle_t now, last, mask, max, delta;
> + u64 now, last, mask, max, delta;
> unsigned int seq;
>
> /*
> @@ -199,12 +199,12 @@ static inline cycle_t timekeeping_get_de
> return delta;
> }
> #else
> -static inline void timekeeping_check_update(struct timekeeper *tk, cycle_t offset)
> +static inline void timekeeping_check_update(struct timekeeper *tk, u64 offset)
> {
> }
> -static inline cycle_t timekeeping_get_delta(struct tk_read_base *tkr)
> +static inline u64 timekeeping_get_delta(struct tk_read_base *tkr)
> {
> - cycle_t cycle_now, delta;
> + u64 cycle_now, delta;
>
> /* read clocksource */
> cycle_now = tkr->read(tkr->clock);
> @@ -229,7 +229,7 @@ static inline cycle_t timekeeping_get_de
> */
> static void tk_setup_internals(struct timekeeper *tk, struct clocksource *clock)
> {
> - cycle_t interval;
> + u64 interval;
> u64 tmp, ntpinterval;
> struct clocksource *old_clock;
>
> @@ -254,7 +254,7 @@ static void tk_setup_internals(struct ti
> if (tmp == 0)
> tmp = 1;
>
> - interval = (cycle_t) tmp;
> + interval = (u64) tmp;
> tk->cycle_interval = interval;
>
> /* Go back from cycles -> shifted ns */
> @@ -346,16 +346,16 @@ static inline u64 timekeeping_delta_to_n
>
> static inline u64 timekeeping_get_ns(struct tk_read_base *tkr)
> {
> - cycle_t delta;
> + u64 delta;
>
> delta = timekeeping_get_delta(tkr);
> return timekeeping_delta_to_ns(tkr, delta);
> }
>
> static inline u64 timekeeping_cycles_to_ns(struct tk_read_base *tkr,
> - cycle_t cycles)
> + u64 cycles)
> {
> - cycle_t delta;
> + u64 delta;
>
> /* calculate the delta since the last update_wall_time */
> delta = clocksource_delta(cycles, tkr->cycle_last, tkr->mask);
> @@ -459,9 +459,9 @@ u64 ktime_get_raw_fast_ns(void)
> EXPORT_SYMBOL_GPL(ktime_get_raw_fast_ns);
>
> /* Suspend-time cycles value for halted fast timekeeper. */
> -static cycle_t cycles_at_suspend;
> +static u64 cycles_at_suspend;
>
> -static cycle_t dummy_clock_read(struct clocksource *cs)
> +static u64 dummy_clock_read(struct clocksource *cs)
> {
> return cycles_at_suspend;
> }
> @@ -655,7 +655,7 @@ static void timekeeping_update(struct ti
> static void timekeeping_forward_now(struct timekeeper *tk)
> {
> struct clocksource *clock = tk->tkr_mono.clock;
> - cycle_t cycle_now, delta;
> + u64 cycle_now, delta;
> u64 nsec;
>
> cycle_now = tk->tkr_mono.read(clock);
> @@ -928,7 +928,7 @@ void ktime_get_snapshot(struct system_ti
> ktime_t base_real;
> u64 nsec_raw;
> u64 nsec_real;
> - cycle_t now;
> + u64 now;
>
> WARN_ON_ONCE(timekeeping_suspended);
>
> @@ -987,8 +987,8 @@ static int scale64_check_overflow(u64 mu
> * interval is partial_history_cycles.
> */
> static int adjust_historical_crosststamp(struct system_time_snapshot *history,
> - cycle_t partial_history_cycles,
> - cycle_t total_history_cycles,
> + u64 partial_history_cycles,
> + u64 total_history_cycles,
> bool discontinuity,
> struct system_device_crosststamp *ts)
> {
> @@ -1052,7 +1052,7 @@ static int adjust_historical_crosststamp
> /*
> * cycle_between - true if test occurs chronologically between before and after
> */
> -static bool cycle_between(cycle_t before, cycle_t test, cycle_t after)
> +static bool cycle_between(u64 before, u64 test, u64 after)
> {
> if (test > before && test < after)
> return true;
> @@ -1082,7 +1082,7 @@ int get_device_system_crosststamp(int (*
> {
> struct system_counterval_t system_counterval;
> struct timekeeper *tk = &tk_core.timekeeper;
> - cycle_t cycles, now, interval_start;
> + u64 cycles, now, interval_start;
> unsigned int clock_was_set_seq = 0;
> ktime_t base_real, base_raw;
> u64 nsec_real, nsec_raw;
> @@ -1143,7 +1143,7 @@ int get_device_system_crosststamp(int (*
> * current interval
> */
> if (do_interp) {
> - cycle_t partial_history_cycles, total_history_cycles;
> + u64 partial_history_cycles, total_history_cycles;
> bool discontinuity;
>
> /*
> @@ -1649,7 +1649,7 @@ void timekeeping_resume(void)
> struct clocksource *clock = tk->tkr_mono.clock;
> unsigned long flags;
> struct timespec64 ts_new, ts_delta;
> - cycle_t cycle_now;
> + u64 cycle_now;
>
> sleeptime_injected = false;
> read_persistent_clock64(&ts_new);
> @@ -2015,11 +2015,11 @@ static inline unsigned int accumulate_ns
> *
> * Returns the unconsumed cycles.
> */
> -static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
> +static u64 logarithmic_accumulation(struct timekeeper *tk, u64 offset,
> u32 shift,
> unsigned int *clock_set)
> {
> - cycle_t interval = tk->cycle_interval << shift;
> + u64 interval = tk->cycle_interval << shift;
> u64 raw_nsecs;
>
> /* If the offset is smaller than a shifted interval, do nothing */
> @@ -2060,7 +2060,7 @@ void update_wall_time(void)
> {
> struct timekeeper *real_tk = &tk_core.timekeeper;
> struct timekeeper *tk = &shadow_timekeeper;
> - cycle_t offset;
> + u64 offset;
> int shift = 0, maxshift;
> unsigned int clock_set = 0;
> unsigned long flags;
> --- a/kernel/time/timekeeping_internal.h
> +++ b/kernel/time/timekeeping_internal.h
> @@ -13,9 +13,9 @@ extern void tk_debug_account_sleep_time(
> #endif
>
> #ifdef CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE
> -static inline cycle_t clocksource_delta(cycle_t now, cycle_t last, cycle_t mask)
> +static inline u64 clocksource_delta(u64 now, u64 last, u64 mask)
> {
> - cycle_t ret = (now - last) & mask;
> + u64 ret = (now - last) & mask;
>
> /*
> * Prevent time going backwards by checking the MSB of mask in
> @@ -24,7 +24,7 @@ static inline cycle_t clocksource_delta(
> return ret & ~(mask >> 1) ? 0 : ret;
> }
> #else
> -static inline cycle_t clocksource_delta(cycle_t now, cycle_t last, cycle_t mask)
> +static inline u64 clocksource_delta(u64 now, u64 last, u64 mask)
> {
> return (now - last) & mask;
> }
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -2847,7 +2847,7 @@ static void ftrace_shutdown_sysctl(void)
> }
> }
>
> -static cycle_t ftrace_update_time;
> +static u64 ftrace_update_time;
> unsigned long ftrace_update_tot_cnt;
>
> static inline int ops_traces_mod(struct ftrace_ops *ops)
> @@ -2894,7 +2894,7 @@ static int ftrace_update_code(struct mod
> {
> struct ftrace_page *pg;
> struct dyn_ftrace *p;
> - cycle_t start, stop;
> + u64 start, stop;
> unsigned long update_cnt = 0;
> unsigned long rec_flags = 0;
> int i;
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -234,7 +234,7 @@ static int __init set_tracepoint_printk(
> }
> __setup("tp_printk", set_tracepoint_printk);
>
> -unsigned long long ns2usecs(cycle_t nsec)
> +unsigned long long ns2usecs(u64 nsec)
> {
> nsec += 500;
> do_div(nsec, 1000);
> @@ -571,7 +571,7 @@ int trace_pid_write(struct trace_pid_lis
> return read;
> }
>
> -static cycle_t buffer_ftrace_now(struct trace_buffer *buf, int cpu)
> +static u64 buffer_ftrace_now(struct trace_buffer *buf, int cpu)
> {
> u64 ts;
>
> @@ -585,7 +585,7 @@ static cycle_t buffer_ftrace_now(struct
> return ts;
> }
>
> -cycle_t ftrace_now(int cpu)
> +u64 ftrace_now(int cpu)
> {
> return buffer_ftrace_now(&global_trace.trace_buffer, cpu);
> }
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -157,7 +157,7 @@ struct trace_array_cpu {
> unsigned long policy;
> unsigned long rt_priority;
> unsigned long skipped_entries;
> - cycle_t preempt_timestamp;
> + u64 preempt_timestamp;
> pid_t pid;
> kuid_t uid;
> char comm[TASK_COMM_LEN];
> @@ -175,7 +175,7 @@ struct trace_buffer {
> struct trace_array *tr;
> struct ring_buffer *buffer;
> struct trace_array_cpu __percpu *data;
> - cycle_t time_start;
> + u64 time_start;
> int cpu;
> };
>
> @@ -686,7 +686,7 @@ static inline void __trace_stack(struct
> }
> #endif /* CONFIG_STACKTRACE */
>
> -extern cycle_t ftrace_now(int cpu);
> +extern u64 ftrace_now(int cpu);
>
> extern void trace_find_cmdline(int pid, char comm[]);
> extern void trace_event_follow_fork(struct trace_array *tr, bool enable);
> @@ -733,7 +733,7 @@ extern int trace_selftest_startup_branch
> #endif /* CONFIG_FTRACE_STARTUP_TEST */
>
> extern void *head_page(struct trace_array_cpu *data);
> -extern unsigned long long ns2usecs(cycle_t nsec);
> +extern unsigned long long ns2usecs(u64 nsec);
> extern int
> trace_vbprintk(unsigned long ip, const char *fmt, va_list args);
> extern int
> --- a/kernel/trace/trace_irqsoff.c
> +++ b/kernel/trace/trace_irqsoff.c
> @@ -286,7 +286,7 @@ static void irqsoff_print_header(struct
> /*
> * Should this new latency be reported/recorded?
> */
> -static bool report_latency(struct trace_array *tr, cycle_t delta)
> +static bool report_latency(struct trace_array *tr, u64 delta)
> {
> if (tracing_thresh) {
> if (delta < tracing_thresh)
> @@ -304,7 +304,7 @@ check_critical_timing(struct trace_array
> unsigned long parent_ip,
> int cpu)
> {
> - cycle_t T0, T1, delta;
> + u64 T0, T1, delta;
> unsigned long flags;
> int pc;
>
> --- a/kernel/trace/trace_sched_wakeup.c
> +++ b/kernel/trace/trace_sched_wakeup.c
> @@ -346,7 +346,7 @@ static void wakeup_print_header(struct s
> /*
> * Should this new latency be reported/recorded?
> */ > -static bool report_latency(struct trace_array *tr, cycle_t delta) > +static bool report_latency(struct trace_array *tr, u64 delta) > { > if (tracing_thresh) { > if (delta < tracing_thresh) > @@ -428,7 +428,7 @@ probe_wakeup_sched_switch(void *ignore, > struct task_struct *prev, struct task_struct *next) > { > struct trace_array_cpu *data; > - cycle_t T0, T1, delta; > + u64 T0, T1, delta; > unsigned long flags; > long disabled; > int cpu; > --- a/sound/hda/hdac_stream.c > +++ b/sound/hda/hdac_stream.c > @@ -465,7 +465,7 @@ int snd_hdac_stream_set_params(struct hd > } > EXPORT_SYMBOL_GPL(snd_hdac_stream_set_params); > > -static cycle_t azx_cc_read(const struct cyclecounter *cc) > +static u64 azx_cc_read(const struct cyclecounter *cc) > { > struct hdac_stream *azx_dev = container_of(cc, struct hdac_stream, cc); > > @@ -473,7 +473,7 @@ static cycle_t azx_cc_read(const struct > } > > static void azx_timecounter_init(struct hdac_stream *azx_dev, > - bool force, cycle_t last) > + bool force, u64 last) > { > struct timecounter *tc = &azx_dev->tc; > struct cyclecounter *cc = &azx_dev->cc; > @@ -523,7 +523,7 @@ void snd_hdac_stream_timecounter_init(st > struct snd_pcm_runtime *runtime = azx_dev->substream->runtime; > struct hdac_stream *s; > bool inited = false; > - cycle_t cycle_last = 0; > + u64 cycle_last = 0; > int i = 0; > > list_for_each_entry(s, &bus->stream_list, list) { > --- a/virt/kvm/arm/arch_timer.c > +++ b/virt/kvm/arm/arch_timer.c > @@ -39,7 +39,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu > vcpu->arch.timer_cpu.active_cleared_last = false; > } > > -static cycle_t kvm_phys_timer_read(void) > +static u64 kvm_phys_timer_read(void) > { > return timecounter->cc->read(timecounter->cc); > } > @@ -102,7 +102,7 @@ static void kvm_timer_inject_irq_work(st > > static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu) > { > - cycle_t cval, now; > + u64 cval, now; > > cval = vcpu->arch.timer_cpu.cntv_cval; > now = kvm_phys_timer_read() - 
vcpu->kvm->arch.timer.cntvoff; > @@ -155,7 +155,7 @@ static bool kvm_timer_irq_can_fire(struc > bool kvm_timer_should_fire(struct kvm_vcpu *vcpu) > { > struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu; > - cycle_t cval, now; > + u64 cval, now; > > if (!kvm_timer_irq_can_fire(vcpu)) > return false; > > -- David Gibson | I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_ | _way_ _around_! http://www.ozlabs.org/~dgibson [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 819 bytes --] ^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [patch 0/6] timekeeping: Cure the signed/unsigned wreckage
  2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner
                   ` (5 preceding siblings ...)
  2016-12-08 20:49 ` [patch 6/6] [RFD] timekeeping: Get rid of cycle_t Thomas Gleixner
@ 2016-12-09  4:52 ` John Stultz
  2016-12-09  5:30 ` Peter Zijlstra
  7 siblings, 0 replies; 35+ messages in thread

From: John Stultz @ 2016-12-09 4:52 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Peter Zijlstra, Ingo Molnar, David Gibson, Liav Rehana,
	Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier,
	Christopher S. Hall

On Thu, Dec 8, 2016 at 12:49 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> This series addresses the recently reintroduced signed vs. unsigned
> wreckage by cleaning up the whole call chain instead of just making a
> simple s64 -> u64 'fix' at one point and keeping the rest signed, which
> eventually led to the unintended signed conversion and brought back an
> issue that was fixed a year ago already.
>
> Here is the queue:
>
>  timekeeping: Force unsigned clocksource to nanoseconds conversions
>  timekeeping: Make the conversion call chain consistently unsigned
>  timekeeping: Get rid of pointless typecasts
>
> These three patches are definitely urgent material
>
>  timekeeping: Use mul_u64_u32_shr() instead of open coding it

Thanks for putting these together Thomas! So I'm happy with the set above.

> Can wait for 4.11, but for sanity reasons it should go into 4.10
>
> [RFD] timekeeping: Provide optional 128bit math
>
> This is material for discussion. I'm not sure if we want to do that at
> all, but it addresses the insanities of long time scheduled out VMs.

Yea. Here I feel like there has to be some bound after which we don't
function when we're starved of interrupts. On some systems it will be the
hardware clocksource wrapping, on other systems it's the multiplication
overflowing.

I think we should avoid the system failing critically (which the initial
patches address), as there are cases like halting the system via kdb or
freezing a VM for a long period of time (hosts suspending is an example),
but having a smallish time inconsistency event in this case doesn't seem
tragic to me.

Providing a config option for folks who want robust time correctness in
the event of insane system scheduling/interrupt latency isn't something I
object to, but I worry it will just push the boundary of what is "expected
broken by design" out further (why bother suspending/resuming the
timekeeping subsystem when you can just starve it, etc).

thanks
-john
* Re: [patch 0/6] timekeeping: Cure the signed/unsigned wreckage
  2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner
                   ` (6 preceding siblings ...)
  2016-12-09  4:52 ` John Stultz
@ 2016-12-09  5:30 ` Peter Zijlstra
  7 siblings, 0 replies; 35+ messages in thread

From: Peter Zijlstra @ 2016-12-09 5:30 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, John Stultz, Ingo Molnar, David Gibson, Liav Rehana,
	Chris Metcalf, Richard Cochran, Parit Bhargava, Laurent Vivier,
	Christopher S. Hall

On Thu, Dec 08, 2016 at 08:49:31PM -0000, Thomas Gleixner wrote:
> Here is the queue:
>
>  timekeeping: Force unsigned clocksource to nanoseconds conversions
>  timekeeping: Make the conversion call chain consistently unsigned
>  timekeeping: Get rid of pointless typecasts
>
> These three patches are definitely urgent material
>
>  timekeeping: Use mul_u64_u32_shr() instead of open coding it
>

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
end of thread, other threads: [~2017-01-14 12:52 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --

2016-12-08 20:49 [patch 0/6] timekeeping: Cure the signed/unsigned wreckage Thomas Gleixner
2016-12-08 20:49 ` [patch 1/6] timekeeping: Force unsigned clocksource to nanoseconds conversion Thomas Gleixner
2016-12-08 23:38 ` David Gibson
2016-12-09 11:13 ` [tip:timers/core] timekeeping_Force_unsigned_clocksource_to_nanoseconds_conversion tip-bot for Thomas Gleixner
2016-12-08 20:49 ` [patch 2/6] timekeeping: Make the conversion call chain consistently unsigned Thomas Gleixner
2016-12-08 23:39 ` David Gibson
2016-12-09 11:13 ` [tip:timers/core] " tip-bot for Thomas Gleixner
2016-12-08 20:49 ` [patch 3/6] timekeeping: Get rid of pointless typecasts Thomas Gleixner
2016-12-08 23:40 ` David Gibson
2016-12-09 11:14 ` [tip:timers/core] " tip-bot for Thomas Gleixner
2016-12-08 20:49 ` [patch 4/6] timekeeping: Use mul_u64_u32_shr() instead of open coding it Thomas Gleixner
2016-12-08 23:41 ` David Gibson
2016-12-09 11:14 ` [tip:timers/core] " tip-bot for Thomas Gleixner
2016-12-08 20:49 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Thomas Gleixner
2016-12-09 4:08 ` Ingo Molnar
2016-12-09 4:29 ` Ingo Molnar
2016-12-09 4:39 ` John Stultz
2016-12-09 4:48 ` Peter Zijlstra
2016-12-09 5:22 ` Ingo Molnar
2016-12-09 5:41 ` Peter Zijlstra
2016-12-09 5:11 ` Peter Zijlstra
2016-12-09 6:08 ` Peter Zijlstra
2016-12-09 5:26 ` Peter Zijlstra
2016-12-09 6:38 ` Peter Zijlstra
2016-12-09 8:30 ` Peter Zijlstra
2016-12-09 9:11 ` Peter Zijlstra
2016-12-09 10:01 ` Peter Zijlstra
2016-12-09 17:32 ` Chris Metcalf
2017-01-14 12:51 ` [tip:timers/core] math64, timers: Fix 32bit mul_u64_u32_shr() and friends tip-bot for Peter Zijlstra
2016-12-09 10:18 ` [patch 5/6] [RFD] timekeeping: Provide optional 128bit math Peter Zijlstra
2016-12-09 17:20 ` Chris Metcalf
2016-12-08 20:49 ` [patch 6/6] [RFD] timekeeping: Get rid of cycle_t Thomas Gleixner
2016-12-08 23:43 ` David Gibson
2016-12-09 4:52 ` [patch 0/6] timekeeping: Cure the signed/unsigned wreckage John Stultz
2016-12-09 5:30 ` Peter Zijlstra