Hi Thomas -

Thanks very much for your help & guidance in the previous mail:

On 08/03/2018, Thomas Gleixner wrote:
> The right way to do that is to put the raw conversion values and the raw
> seconds base value into the vdso data and implement the counterpart of
> getrawmonotonic64(). And if that is done, then it can be done for _ALL_
> clocksources which support VDSO access and not just for the TSC.

I have now done this in a new patch, sent in a mail with the subject:
  '[PATCH v4.16-rc4 1/1] x86/vdso: on Intel, VDSO should handle CLOCK_MONOTONIC_RAW'
which should address all the concerns you raised.

> I already know how that works, really.

I never doubted or meant to impugn that! I am beginning to know a little
about how it works too, thanks in great part to your help last week -
thank you for your patience. I was impatient last week to get access to
low-latency timers for a work project, and was trying to read the
unadjusted clock.

> instead of making completely false claims about the correctness of the kernel
> timekeeping infrastructure.

I really didn't mean to make any such claims - I'm sorry if I did. I was
only trying to say that by the time the results of
clock_gettime(CLOCK_MONOTONIC_RAW, &ts) were available to the caller,
they were of little use, because the call latency often dwarfed the time
differences being measured.

Anyway, I hope you will consider putting such a patch into the kernel at
some point.

I have also developed a version for ARM, but it depends on making the
CNTPCT + CNTFRQ registers readable in user-space, which is not meant to
be secure and is not normally done, but it does work. It is against the
Texas Instruments (ti-linux) kernel, can be enabled with a new Kconfig
option, and brings latencies down from > 300ns to < 20ns. Maybe I should
post that to kernel.org as well, or to ti.com?

I also have a separate patch for the vdso_tsc_calibration export of
tsc_khz and the calibration, which no longer returns pointers into the
VDSO - I can post this as a patch if you like.

Thanks & Best Regards,
Jason Vas Dias
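
P.S. - for reference, a minimal user-space sketch of the conversion you
describe: raw mult/shift and a raw time base taken from the vDSO data and
applied to a TSC delta, as the counterpart of getrawmonotonic64() would do.
This is purely illustrative, not the patch code - the struct and field names
(vdso_raw_data, raw_mult, raw_shift, raw_sec, raw_nsec, cycle_last) are
invented here and are not the real vsyscall_gtod_data layout, and the
seqcount retry loop around the read is omitted:

  #include <stdint.h>
  #include <time.h>

  struct vdso_raw_data {
          uint64_t cycle_last;   /* TSC value at the last update            */
          uint32_t raw_mult;     /* raw (not NTP-adjusted) clocksource mult */
          uint32_t raw_shift;    /* clocksource shift                       */
          uint64_t raw_sec;      /* CLOCK_MONOTONIC_RAW seconds base        */
          uint64_t raw_nsec;     /* nanoseconds base                        */
  };

  static inline uint64_t rdtsc_ordered(void)
  {
          uint32_t lo, hi;
          /* lfence orders rdtsc against earlier loads, as the vDSO does */
          __asm__ __volatile__("lfence; rdtsc" : "=a"(lo), "=d"(hi));
          return ((uint64_t)hi << 32) | lo;
  }

  static void monotonic_raw_sketch(const struct vdso_raw_data *vd,
                                   struct timespec *ts)
  {
          /* delta stays small between timekeeping updates, so the
           * 64-bit multiply does not overflow in practice */
          uint64_t delta = rdtsc_ordered() - vd->cycle_last;
          uint64_t nsec  = vd->raw_nsec +
                           ((delta * vd->raw_mult) >> vd->raw_shift);

          ts->tv_sec  = (time_t)(vd->raw_sec + nsec / 1000000000ULL);
          ts->tv_nsec = (long)(nsec % 1000000000ULL);
  }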
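
P.P.S. - this is the kind of rough check behind my latency remark above:
take back-to-back CLOCK_MONOTONIC_RAW readings and report the median delta.
On a kernel where CLOCK_MONOTONIC_RAW is not handled in the vDSO, each call
takes the syscall path:

  #include <stdio.h>
  #include <stdlib.h>
  #include <stdint.h>
  #include <time.h>

  #define N 10000

  static int cmp_i64(const void *a, const void *b)
  {
          int64_t x = *(const int64_t *)a, y = *(const int64_t *)b;
          return (x > y) - (x < y);
  }

  int main(void)
  {
          static int64_t delta[N];
          struct timespec t1, t2;

          for (int i = 0; i < N; i++) {
                  clock_gettime(CLOCK_MONOTONIC_RAW, &t1);
                  clock_gettime(CLOCK_MONOTONIC_RAW, &t2);
                  delta[i] = (int64_t)(t2.tv_sec - t1.tv_sec) * 1000000000LL
                           + (t2.tv_nsec - t1.tv_nsec);
          }
          qsort(delta, N, sizeof(delta[0]), cmp_i64);
          printf("median back-to-back delta: %lld ns\n",
                 (long long)delta[N / 2]);
          return 0;
  }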
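
And for the ARM version, the user-space fast path boils down to something
like the following sketch - shown here for AArch64, although the 32-bit
ARMv7 equivalent would use the CP15 accessors instead. Reads of CNTPCT_EL0
and CNTFRQ_EL0 trap at EL0 unless the kernel sets CNTKCTL_EL1.EL0PCTEN,
which is what the new Kconfig option in the ti-linux patch enables; on a
stock kernel the read traps instead of completing in user space:

  #include <stdint.h>

  static inline uint64_t read_cntpct(void)
  {
          uint64_t cnt;
          /* isb keeps the counter read from being speculated early */
          __asm__ __volatile__("isb; mrs %0, cntpct_el0"
                               : "=r"(cnt) : : "memory");
          return cnt;
  }

  static inline uint64_t read_cntfrq(void)
  {
          uint64_t frq;
          __asm__ __volatile__("mrs %0, cntfrq_el0" : "=r"(frq));
          return frq;
  }

  /* Convert counter ticks to nanoseconds without overflowing 64 bits:
   * split into whole seconds and a sub-second remainder. */
  static inline uint64_t ticks_to_ns(uint64_t ticks)
  {
          uint64_t frq = read_cntfrq();
          return (ticks / frq) * 1000000000ULL +
                 ((ticks % frq) * 1000000000ULL) / frq;
  }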