* Re: [Xenomai] userspace absolute timer value
@ 2015-12-23 18:05 Steven Seeger
  2015-12-23 18:43 ` Gilles Chanteperdrix
  0 siblings, 1 reply; 5+ messages in thread
From: Steven Seeger @ 2015-12-23 18:05 UTC (permalink / raw)
  To: xenomai

All,

The issue that I had with using a userspace absolute time to start a timer (what 
the latency test does) was due to a quirk on my board where the powerpc timebase 
was coming up as 0xdXXXXXXXXXXXXXXX, which was causing the 32-bit userland to 
lose precision when getting the monotonic clock value. The latency test gets 
the time, adds a millisecond, and uses this time to start the process. However, 
on my machine the time was way off due to the loss of precision (there were 
more than 2^32 seconds, but time_t is only 32-bit). On my board, adding some 
code to set the timebase to 0 in head_44x.S cleared up all the issues. 
Everything is working for me now. This appears to be a problem with how Cobalt 
deals with 64-bit ns counters and 32-bit userspace clocks, however I could be 
missing something.

Steven




* Re: [Xenomai] userspace absolute timer value
  2015-12-23 18:05 [Xenomai] userspace absolute timer value Steven Seeger
@ 2015-12-23 18:43 ` Gilles Chanteperdrix
  2015-12-23 19:14   ` Steven Seeger
  0 siblings, 1 reply; 5+ messages in thread
From: Gilles Chanteperdrix @ 2015-12-23 18:43 UTC (permalink / raw)
  To: Steven Seeger; +Cc: xenomai

On Wed, Dec 23, 2015 at 01:05:00PM -0500, Steven Seeger wrote:
> All,
> 
> The issue that I had with using a userspace absolute time to start a timer (what 
> the latency test does) was due to a quirk on my board where the powerpc timebase 
> was coming up as 0xdXXXXXXXXXXXXXXX, which was causing the 32-bit userland to 
> lose precision when getting the monotonic clock value. The latency test gets 
> the time, adds a millisecond, and uses this time to start the process. However, 
> on my machine the time was way off due to the loss of precision (there were 
> more than 2^32 seconds, but time_t is only 32-bit). On my board, adding some 
> code to set the timebase to 0 in head_44x.S cleared up all the issues. 
> Everything is working for me now. This appears to be a problem with how Cobalt 
> deals with 64-bit ns counters and 32-bit userspace clocks, however I could be 
> missing something.

If I understand correctly, your problem is that struct timespec's
tv_sec member has 32 bits. Well, I am afraid there is not much we
can do about that (I hear mainline has a plan to switch to a new
timespec with a 64-bit tv_sec, but I do not know how much of that
plan has been implemented).

Can you not call clock_settime to set a wallclock offset, which would
at least allow CLOCK_REALTIME to behave as expected?

Regards.

-- 
					    Gilles.
https://click-hack.org



* Re: [Xenomai] userspace absolute timer value
  2015-12-23 18:43 ` Gilles Chanteperdrix
@ 2015-12-23 19:14   ` Steven Seeger
  0 siblings, 0 replies; 5+ messages in thread
From: Steven Seeger @ 2015-12-23 19:14 UTC (permalink / raw)
  To: Gilles Chanteperdrix; +Cc: xenomai

On Wednesday, December 23, 2015 19:43:41 Gilles Chanteperdrix wrote:
> If I understand correctly, your problem is that struct timespec's
> tv_sec member has 32 bits. Well, I am afraid there is not much we
> can do about that (I hear mainline has a plan to switch to a new
> timespec with a 64-bit tv_sec, but I do not know how much of that
> plan has been implemented).

Yes, this is exactly my problem.

> 
> Can you not call clock_settime to set a wallclock offset, which would
> at least allow CLOCK_REALTIME to behave as expected?

The issue is with the testsuite/latency app. It calls 
clock_gettime(CLOCK_MONOTONIC), adds a millisecond to that value, and then 
uses the result as the absolute start time of the latency thread. All 
calculations are based on this primed value.

There really is no reason for my board to come up with such a ridiculous 
timebase value; I have no idea why it does that. I set it to 0 very early in 
the kernel boot cycle and that fixed the issue. (This board is loaded via JTAG, 
so there may be some weirdness there.) This fix will last 136 years, right? :) 
My point was just that if the timebase is not a reasonable value, I think this 
bug will manifest.

IMHO there is no benefit to allowing us to say we want some task to start in 
the year 500,000,000,000, so there isn't really a need for such large numbers 
in this one use case.

Your idea of a fix is essentially correct and should work across all systems. 
However, I was trying to run the standard latency app, which should also work 
across all systems! :)

Steven




* Re: [Xenomai] userspace absolute timer value
  2015-12-15  1:19 Steven Seeger
@ 2015-12-22 16:59 ` Philippe Gerum
  0 siblings, 0 replies; 5+ messages in thread
From: Philippe Gerum @ 2015-12-22 16:59 UTC (permalink / raw)
  To: steven.seeger, xenomai

On 12/15/2015 02:19 AM, Steven Seeger wrote:
> Since my last post, I seem to have solved the issues with my ppc44x board 
> hard locking up. I've relayed this info to Philippe, and hopefully he will 
> confirm that I'm correct and that I should make a patch.

Confirmed, the bug is real and ugly, the fix is nice and right. Please
send a patch over.

Thanks,

-- 
Philippe.



* [Xenomai] userspace absolute timer value
@ 2015-12-15  1:19 Steven Seeger
  2015-12-22 16:59 ` Philippe Gerum
  0 siblings, 1 reply; 5+ messages in thread
From: Steven Seeger @ 2015-12-15  1:19 UTC (permalink / raw)
  To: xenomai

Since my last post, I seem to have solved the issues with my ppc44x board 
hard locking up. I've relayed this info to Philippe, and hopefully he will 
confirm that I'm correct and that I should make a patch. However, in the 
process, latency -t1 and latency -t2 have stopped working correctly.

One thing I do notice now with latency -t0 is that the timerfd_handler entry 
in /proc/xenomai/timer/coreclk shows a tremendous number of seconds (1 
billion+), and you can keep printing the output and watch it count down a 
second at a time. This suggests there may be some kind of discrepancy between 
CLOCK_MONOTONIC and the timer that's used to program shots.

I did look at the ticks for the coreclock, and it appears to be 400 ticks per 
microsecond, which is what the Cobalt core is reporting via 
xnclock_ns_to_ticks() (I pass it 1000 ns and get 400 as a result).

Can anyone point me in the direction of where to look for this issue?

Thanks,
Steven



