Hey guys,

I have made an interesting (and unexpected) observation regarding the drift between the system time returned by the gettimeofday() system call and the time obtained from the time stamp counter (defined as the TSC reading divided by the CPU frequency). I was wondering if anyone could help me explain this.

Let me first describe my system: I have 3 hardware-based virtual machines (HVMs) running a Linux 2.6.24-26-generic (tickless) kernel. The timer mode is 1 (the default, where virtual time is always wallclock time). I ran an experiment in which I compared the *difference between the values returned by rdtsc() and gettimeofday() on the three domains* against real time (from a third, independent source); a sketch of the measurement loop is included at the end of this mail. There is no NTP sync on either the control domain or any of the user domains. The scheduler weight and cap values for all domains (domain-0 and the user domains) are the defaults, 256 and 0 respectively.

I observe that:

1. Over time, the drift between TSC time and gettimeofday() time increases at a constant rate. This is expected: since the TSC and gettimeofday() derive their values from different physical counters, some drift will accumulate.

2. What is surprising is that the rate is different on all three domains. This is what puzzles me. If I understand the virtualization architecture correctly, read shadows are created for each user domain and updated by domain-0, and a read of the TSC returns a value from these shadow tables. Since I am using timer mode = 1, I expect the system time to also be the same on all domains. That means the difference between TSC time and system time should increase by the same amount on all domains, which is not what I observe.

Can somebody give me a pointer to what I am missing here? Has anyone else observed this behavior?

Thanks!
--pr
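
P.S. For reference, here is a minimal sketch of the kind of per-domain measurement loop described above. This is not the original test code; it assumes a hypothetical fixed nominal frequency CPU_HZ (on a real system you would calibrate it or read it from /proc/cpuinfo), and it only tracks how the TSC-vs-gettimeofday() difference evolves over time:

#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

/* Assumed nominal CPU frequency in Hz (hypothetical value;
 * calibrate or read from /proc/cpuinfo on a real system). */
#define CPU_HZ 2400000000ULL

/* Read the time stamp counter (x86). */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    for (;;) {
        struct timeval tv;
        uint64_t tsc = rdtsc();
        gettimeofday(&tv, NULL);

        /* TSC-derived time in microseconds (TSC reading / frequency). */
        double tsc_us = (double)tsc / CPU_HZ * 1e6;
        /* System time in microseconds since the epoch. */
        double sys_us = tv.tv_sec * 1e6 + tv.tv_usec;

        /* The absolute offset is meaningless (TSC counts from boot,
         * gettimeofday() from the epoch); what matters is the rate
         * at which this difference changes on each domain. */
        printf("tsc-sys difference: %.3f us\n", tsc_us - sys_us);
        sleep(10);
    }
    return 0;
}

Logging this difference on each domain and on the independent time source is how I compute the per-domain drift rates mentioned above.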