On Tue, 2017-04-04 at 13:36 -0400, Luiz Capitulino wrote:
> 
> On further debugging this, I realized that I had overlooked
> something: the timer interrupt in this trace is not the tick, but
> cyclictest's timer (remember that the test-case consists of pinning
> cyclictest and a task hogging the CPU to the same CPU).
> 
> I'm running cyclictest with -i 200. If I increase this to -i 1000,
> then I seem unable to reproduce the issue (caution: even with -i 200
> it doesn't always happen, but it does usually happen after I restart
> the test-case a few times; I've never been able to reproduce it with
> -i 1000).
> 
> Now, if it's really cyclictest that's causing the timer interrupts
> to get aligned, I guess this might not have a solution? (Note: I
> haven't been able to reproduce this on bare metal.)

With any sample (tick) based timekeeping, it is possible to construct
workloads that avoid the sampling and end up with skewed statistics
(a sketch of such a workload is below).

However, given that local users can already DoS the system in all
kinds of ways, skewed statistics are probably not that high up on
the list of importance.

If there were a way to do accurate accounting (true vtime accounting)
without noticeably increasing the overhead of every syscall and
interrupt, that might be worth it, but syscall overhead is likely to
be a more important factor than the accuracy of the statistics.

I don't know whether doing only TSC reads and subtraction/addition,
and delaying the conversion to cputime until a later point, would
slow down system calls measurably compared with reading jiffies and
comparing it against a cached value of jiffies (a sketch of that
idea is also below), nor do I know whether spending the time
implementing and testing it would be worthwhile :)
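To make the "hiding from the sampling" point concrete, here is a
minimal user-space sketch. It assumes a 1000 Hz tick (CONFIG_HZ=1000)
firing near millisecond boundaries of CLOCK_MONOTONIC, which is a
simplification; the constants are illustrative, not tuned:

/*
 * Illustrative only: a workload that "hides" from tick-based
 * cputime sampling by sleeping across the instant each tick is
 * expected to fire, while burning CPU the rest of the time.
 */
#include <stdint.h>
#include <time.h>

#define NSEC_PER_MSEC	1000000L
#define GUARD_NSEC	 100000L	/* stop spinning 100 us early */

static int64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (int64_t)ts.tv_sec * 1000000000L + ts.tv_nsec;
}

int main(void)
{
	for (;;) {
		/* next expected tick: the next millisecond boundary */
		int64_t next_tick =
			(now_ns() / NSEC_PER_MSEC + 1) * NSEC_PER_MSEC;
		struct timespec nap = { 0, 2 * GUARD_NSEC };

		/* burn CPU until just before the tick should fire */
		while (now_ns() < next_tick - GUARD_NSEC)
			;	/* spin */

		/* sleep across the tick so the sample misses us */
		nanosleep(&nap, NULL);
	}
}

A task running this consumes most of each millisecond of CPU time,
yet is asleep whenever the sample is taken, so its accounted utime
stays near zero.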
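And here is a hypothetical sketch of the TSC idea from the last
paragraph, next to the cheap jiffies comparison it would be competing
against. None of these names or helpers exist in the kernel; they are
only meant to make the proposed fast path concrete:

/*
 * Hypothetical sketch only: accumulate raw TSC deltas on the syscall
 * fast path and defer the cycles-to-cputime conversion until the
 * statistics are actually read.  struct task_vtime, the hook names
 * and the tsc_khz handling are all made up for illustration.
 * Interrupt entry/exit would need the same pair of hooks.
 */
#include <stdint.h>

struct task_vtime {
	uint64_t tsc_at_switch;	/* TSC value at last context change */
	uint64_t user_cycles;	/* raw, unconverted cycle counts */
	uint64_t sys_cycles;
};

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

/* Syscall entry: one TSC read, one subtract, one add. */
static inline void vtime_syscall_enter(struct task_vtime *vt)
{
	uint64_t now = rdtsc();

	vt->user_cycles += now - vt->tsc_at_switch;  /* was in user mode */
	vt->tsc_at_switch = now;
}

/* Syscall exit: same cost, charging the kernel side. */
static inline void vtime_syscall_exit(struct task_vtime *vt)
{
	uint64_t now = rdtsc();

	vt->sys_cycles += now - vt->tsc_at_switch;   /* was in the kernel */
	vt->tsc_at_switch = now;
}

/*
 * Slow path, run only when someone reads the stats (e.g. through
 * /proc/<pid>/stat): convert accumulated cycles to nanoseconds.
 * Overflow handling omitted for brevity; tsc_khz stands in for the
 * calibrated TSC frequency.
 */
static uint64_t cycles_to_ns(uint64_t cycles, uint64_t tsc_khz)
{
	return cycles * 1000000ULL / tsc_khz;
}

/* For contrast, roughly the check the tick-based path gets away with: */
static inline int tick_elapsed(uint64_t jiffies_now, uint64_t *cached)
{
	if (jiffies_now == *cached)
		return 0;	/* common case: no tick since last check */
	*cached = jiffies_now;
	return 1;
}

Whether two RDTSC executions per syscall, plus the deferred
conversion, can get close to the cost of that jiffies comparison is
exactly the open question above.

--
All rights reversed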