xen-devel.lists.xenproject.org archive mirror
From: Joao Martins <joao.m.martins@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 1/8] x86/time: improve cross-CPU clock monotonicity (and more)
Date: Tue, 21 Jun 2016 14:57:00 +0100	[thread overview]
Message-ID: <576947AC.4040904@oracle.com> (raw)
In-Reply-To: <57694EF502000078000F735A@prv-mh.provo.novell.com>



On 06/21/2016 01:28 PM, Jan Beulich wrote:
>>>> On 21.06.16 at 14:05, <joao.m.martins@oracle.com> wrote:
> 
>>
>> On 06/17/2016 08:32 AM, Jan Beulich wrote:
>>>>>> On 16.06.16 at 22:27, <joao.m.martins@oracle.com> wrote:
>>>>> I.e. my plan was, once the backwards moves are small enough, to maybe
>>>>> indeed compensate them by maintaining a global variable tracking
>>>>> the most recently returned value. There are issues with such an
>>>>> approach too, though: HT effects can result in one hyperthread
>>>>> making it just past that check of the global, then hardware
>>>>> switching to the other hyperthread, NOW() producing a slightly
>>>>> larger value there, and hardware switching back to the first
>>>>> hyperthread only after the second one consumed the result of
>>>>> NOW(). Dario's use would be unaffected by this aiui, as his NOW()
>>>>> invocations are globally serialized through a spinlock, but arbitrary
>>>>> NOW() invocations on two hyperthreads can't be made such that
>>>>> the invoking party can be guaranteed to see strictly monotonic
>>>>> values.
>>>>>
>>>>> And btw., similar considerations apply for two fully independent
>>>>> CPUs, if one runs at a much higher P-state than the other (i.e.
>>>>> the faster one could overtake the slower one between the
>>>>> monotonicity check in NOW() and the callers consuming the returned
>>>>> values). So in the end I'm not sure it's worth the performance hit
>>>>> such a global monotonicity check would incur, and therefore I didn't
>>>>> make a respective patch part of this series.
>>>>>
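
The interleaving described above can be replayed deterministically in a
single thread (a hypothetical sketch: `now_clamped()` stands in for a
NOW() that clamps against a global low-water mark; names are made up and
no real SMT scheduling is involved):

```c
#include <stdint.h>

/* Hypothetical clamped clock source: never returns less than it
 * previously returned (the global low-water mark discussed above). */
static uint64_t last_returned;

static uint64_t now_clamped(uint64_t raw_tsc)
{
    if (raw_tsc > last_returned)
        last_returned = raw_tsc;
    return last_returned;
}
```

Even with such a clamp, if hyperthread A obtains now_clamped(100), the
hardware switches to hyperthread B, which obtains and immediately
consumes now_clamped(101), and only then does A get to consume its
result, the consumer observes 101 followed by 100: the clamp orders the
reads, but cannot order the *consumption* of the returned values.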
>>>>
>>>> Hm, guests' pvclock should have faced similar issues too, as their
>>>> local stamps for each vcpu diverge. Linux commit 489fb49 ("x86,
>>>> paravirt: Add a global synchronization point for pvclock") describes
>>>> a fix for situations similar to the scenarios you just described,
>>>> which led to having a global variable to keep track of the most
>>>> recent timestamp. One important chunk of that commit is pasted below
>>>> for convenience:
>>>>
>>>> --
>>>> /*
>>>>  * Assumption here is that last_value, a global accumulator, always goes
>>>>  * forward. If we are less than that, we should not be much smaller.
>>>>  * We assume there is an error marging we're inside, and then the correction
>>>>  * does not sacrifice accuracy.
>>>>  *
>>>>  * For reads: global may have changed between test and return,
>>>>  * but this means someone else updated poked the clock at a later time.
>>>>  * We just need to make sure we are not seeing a backwards event.
>>>>  *
>>>>  * For updates: last_value = ret is not enough, since two vcpus could be
>>>>  * updating at the same time, and one of them could be slightly behind,
>>>>  * making the assumption that last_value always go forward fail to hold.
>>>>  */
>>>>  last = atomic64_read(&last_value);
>>>>  do {
>>>>      if (ret < last)
>>>>          return last;
>>>>      last = atomic64_cmpxchg(&last_value, last, ret);
>>>>  } while (unlikely(last != ret));
>>>> --
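
For reference, the same publish-or-clamp pattern can be written with C11
atomics outside the kernel (a standalone sketch with made-up names, not
the Linux or Xen code; note that on failure atomic_compare_exchange_weak
reloads `last` with the current global value, so the retry loop needs no
separate re-read):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Global accumulator of the largest timestamp handed out so far. */
static _Atomic uint64_t last_value;

/* Return a value that never moves backwards across callers: if our
 * locally computed timestamp is behind the global, hand out the global;
 * otherwise try to publish ours as the new maximum. */
static uint64_t monotonic_clamp(uint64_t ret)
{
    uint64_t last = atomic_load(&last_value);

    for (;;) {
        if (ret < last)
            return last; /* someone else already moved the clock forward */
        /* On failure (including spurious failure of the weak variant),
         * last is reloaded and we go around again. */
        if (atomic_compare_exchange_weak(&last_value, &last, ret))
            return ret;
    }
}
```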
>>>
>>> Meaning they decided it's worth the overhead. But (having read
>>> through the entire description) they don't even discuss whether this
>>> indeed eliminates _all_ apparent backward moves due to effects
>>> like the ones named above.
>>>
>>> Plus, the contention they're facing is limited to a single VM, i.e. likely
>>> much more narrow than that on an entire physical system. So for
>>> us to do the same in the hypervisor, quite a bit more of win would
>>> be needed to outweigh the cost.
>>>
>> The experimental details look very unclear too - likely running the
>> time warp test for 5 days would have flushed some of these cases out.
>> But, as you say, the contention there should be much narrower than
>> that of an entire system.
>>
>> BTW, it was implicit in the discussion, but my apologies for not
>> stating it formally/explicitly. So FWIW:
>>
>> Tested-by: Joao Martins <joao.m.martins@oracle.com>
> 
> Thanks, but this ...
> 
>> This series is certainly a step forward in improving cross-CPU
>> monotonicity, and I am indeed seeing fewer occurrences of time going
>> backwards on my systems.
> 
> ... leaves me guessing whether the above was meant for just this
> patch, or the entire series.
> 
Ah sorry, that was a little ambiguous on my end - it is meant for the entire series.

Joao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Thread overview: 39+ messages
2016-06-15  9:50 [PATCH 0/8] x86/time: improve cross-CPU clock monotonicity (and more) Jan Beulich
2016-06-15 10:26 ` [PATCH 1/8] " Jan Beulich
2016-06-15 10:32   ` Jan Beulich
2016-06-15 22:51   ` Joao Martins
2016-06-16  8:27     ` Jan Beulich
2016-06-16 20:27       ` Joao Martins
2016-06-17  7:32         ` Jan Beulich
2016-06-21 12:05           ` Joao Martins
2016-06-21 12:28             ` Jan Beulich
2016-06-21 13:57               ` Joao Martins [this message]
2016-08-02 19:30   ` Andrew Cooper
2016-06-15 10:26 ` [PATCH 2/8] x86: also generate assembler usable equates for synthesized features Jan Beulich
2016-06-20 12:50   ` Andrew Cooper
2016-06-15 10:27 ` [PATCH 3/8] x86/time: introduce and use rdtsc_ordered() Jan Beulich
2016-06-20 12:59   ` Andrew Cooper
2016-06-20 13:06     ` Jan Beulich
2016-06-20 13:07       ` Andrew Cooper
2016-07-11 11:39     ` Dario Faggioli
2016-06-15 10:28 ` [PATCH 4/8] x86/time: calibrate TSC against platform timer Jan Beulich
2016-06-20 14:20   ` Andrew Cooper
2016-06-20 15:19     ` Jan Beulich
2016-08-02 19:25       ` Andrew Cooper
2016-08-03  9:32         ` Jan Beulich
2016-06-15 10:28 ` [PATCH 5/8] x86/time: correctly honor late clearing of TSC related feature flags Jan Beulich
2016-06-20 14:32   ` Andrew Cooper
2016-06-20 15:20     ` Jan Beulich
2016-07-04 15:39       ` Andrew Cooper
2016-07-04 15:53         ` Jan Beulich
2016-08-02 19:08           ` Andrew Cooper
2016-08-03  9:43             ` Jan Beulich
2016-08-31 13:42               ` Andrew Cooper
2016-08-31 14:03                 ` Jan Beulich
2016-06-15 10:29 ` [PATCH 6/8] x86/time: support 32-bit wide ACPI PM timer Jan Beulich
2016-07-04 15:40   ` Andrew Cooper
2016-06-15 10:30 ` [PATCH 7/8] x86/time: fold recurring code Jan Beulich
2016-07-04 15:43   ` Andrew Cooper
2016-06-15 10:30 ` [PATCH 8/8] x86/time: group time stamps into a structure Jan Beulich
2016-07-04 15:57   ` Andrew Cooper
2016-07-01  7:44 ` Ping: [PATCH 0/8] x86/time: improve cross-CPU clock monotonicity (and more) Jan Beulich
