From: Joao Martins <joao.m.martins@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 1/8] x86/time: improve cross-CPU clock monotonicity (and more)
Date: Thu, 16 Jun 2016 21:27:16 +0100
Message-ID: <57630BA4.30607@oracle.com>
In-Reply-To: <57627F0702000078000F590A@prv-mh.provo.novell.com>

>>> +    unsigned long flags;
>>> +    u64 tsc;
>>> +
>>> +    local_irq_save(flags);
>>> +    ap_bringup_ref.master_stime = read_platform_stime();
>>> +    tsc = rdtsc();
>>> +    local_irq_restore(flags);
>>> +
>>> +    ap_bringup_ref.local_stime = get_s_time_fixed(tsc);
>>> +}
>>> +
>>>  void init_percpu_time(void)
>>>  {
>>>      struct cpu_time *t = &this_cpu(cpu_time);
>>>      unsigned long flags;
>>> +    u64 tsc;
>>>      s_time_t now;
>>>  
>>>      /* Initial estimate for TSC rate. */
>>>      t->tsc_scale = per_cpu(cpu_time, 0).tsc_scale;
>>>  
>>>      local_irq_save(flags);
>>> -    t->local_tsc_stamp = rdtsc();
>>>      now = read_platform_stime();
>> Do we need to take another read from the platform timer here? We could
>> just reuse the one on ap_bringup_ref.master_stime. See also below.
> 
> Yes, for this model of approximation of the local stime delta by
> master stime delta we obviously need to read the master clock
> here again (or else we have no way to calculate the master
> delta, which we want to use as the correcting value).
Ah, correct.
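To make sure I read the model right, the correction being described is
roughly the following (an illustrative sketch of the seeding logic, not the
exact patch code; master_delta is just a local name for exposition):

--
/*
 * Approximate the AP's elapsed local stime by the elapsed master
 * (platform timer) stime since the boot-CPU reference was taken;
 * the second read_platform_stime() is what provides the master
 * delta used as the correcting value.
 */
now = read_platform_stime();                  /* second master read */
master_delta = now - ap_bringup_ref.master_stime;
t->stamp.local_stime = ap_bringup_ref.local_stime + master_delta;
--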

>> So it would be possible that a get_s_time(S0) on CPU0 immediately followed
>> by a get_s_time(S1) on CPU1 [S1 > S0] ends up returning a value backwards,
>> IOW L0 + scale(S0 - T0) is bigger than L1 + scale(S1 - T1). That is
>> independent of the deltas being added on every calibration.
>>
>> So how about we do the seeding in another way?
>>
>> 1) Relying on individual CPU TSC like you do on CPU 0:
>>
>>      t->stamp.master_stime = now;
>> +    t->stamp.local_tsc = 0;
>> +    t->stamp.local_stime = 0;
>> +
>>      if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) )
>>      {
>>          now = get_s_time_fixed(tsc);
>>      }
> 
> As said before elsewhere, such a model can only work if all TSCs
> start ticking at the same time. This assumption breaks, at the
> latest, when a CPU gets physically hotplugged.
Agreed.

>> Or 2) taking into account the skew between platform timer and tsc we take on
>> init_percpu_time. Diff below based on this series:
>>
>> @@ -1394,11 +1508,14 @@ void init_percpu_time(void)
>>      t->tsc_scale = per_cpu(cpu_time, 0).tsc_scale;
>>
>>      local_irq_save(flags);
>> -    now = read_platform_stime();
>> +    now = ap_bringup_ref.master_stime;
>>      tsc = rdtsc_ordered();
>>      local_irq_restore(flags);
>>
>>      t->stamp.master_stime = now;
>> +    t->stamp.local_tsc = boot_tsc_stamp;
> 
> Again - this is invalid. It indeed works fine on a single socket system
> (where all TSCs start ticking at the same time), and gives really good
> results. But due to hotplug (and to a lesser degree multi-socket, but
> likely to a somewhat higher degree multi-node, systems) we just can't
> use boot_tsc_stamp as reference for APs.
It also gives really good results on multi-socket systems, at least on my old
Intel multi-socket box (Westmere-based; the invariant TSC was introduced on
its predecessor, Nehalem). Btw, Linux seems to treat Intel boxes as having
synchronized TSCs across cores and sockets [0][1], though CPU hotplug remains
the troublesome case.

[0] arch/x86/kernel/tsc.c#L1082
[1] arch/x86/kernel/cpu/intel.c#L85
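For reference, the relevant logic in [0] is approximately the following
(paraphrased from memory, so the exact tree may differ slightly):

--
/*
 * Intel systems are normally all synchronized; exceptions must mark
 * the TSC as unstable. Non-Intel multi-socket boxes are assumed
 * unsynchronized.
 */
if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
	/* assume multi socket systems are not synchronized: */
	if (num_possible_cpus() > 1)
		return 1;
}
return 0;
--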

> 
>> +    t->stamp.local_stime = 0;
>> +
>>
>>      /*
>>       * To avoid a discontinuity (TSC and platform clock can't be expected
>>       * to be in perfect sync), initialization here needs to match up with
>> @@ -1406,10 +1523,12 @@ void init_percpu_time(void)
>>       */
>>      if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) )
>>      {
>> +        u64 stime = get_s_time_fixed(tsc);
>> +
>>          if ( system_state < SYS_STATE_smp_boot )
>> -            now = get_s_time_fixed(tsc);
>> +            now = stime;
>>          else
>> -            now += ap_bringup_ref.local_stime - ap_bringup_ref.master_stime;
>> +            now += stime - ap_bringup_ref.master_stime;
> 
> That seems to contradict your desire to keep the skew between
> platform timer and TSC out of all calculations.
> 
That's indeed my desire (on my related series about using the TSC as
clocksource and the TSC stable bit), but being aware of TSC limitations, I was
trying to see whether there was common ground between this seeding with the
platform timer and accounting for the potentially unreliable TSCs of
individual CPUs. But of course, for any of these solutions to have
get_s_time() return monotonically increasing values, it can only work on a box
with a truly invariant TSC.
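For completeness: the invariant-TSC property in question is the one advertised
via CPUID leaf 0x80000007, EDX bit 8 (Xen tracks it as the ITSC/CONSTANT_TSC
feature). A minimal illustrative check, just to pin down what "truly
invariant" means here:

--
static bool tsc_is_invariant(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID.80000007H:EDX[8] is the architectural Invariant TSC bit. */
    cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
    return edx & (1u << 8);
}
--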

>>      }
>>
>> The second option follows a similar thinking to this patch, but either way
>> the local_stime will match the tsc on init_percpu_time, thus being a
>> matched pair. I have a test similar to check_tsc_warp but with
>> get_s_time(), and I no longer observe time going backwards. But without the
>> above I still observe it, even over short periods.
> 
> Right - I'm not claiming the series eliminates all backwards moves.
> But I simply can't see a model to fully eliminate them. 
Totally agree with you.
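For reference, the test I mentioned is shaped roughly like this (a simplified
sketch modeled on Linux's check_tsc_warp(), with the TSC read replaced by
get_s_time(); the lock, globals and loop bound are illustrative only):

--
static DEFINE_SPINLOCK(sync_lock);
static s_time_t last_stime, max_warp;

static void stime_warp_check(void)
{
    s_time_t prev, now;
    unsigned int i;

    for ( i = 0; i < 1000000; i++ )
    {
        /* Serialize samples so "prev" really is the latest global value. */
        spin_lock(&sync_lock);
        prev = last_stime;
        now = get_s_time();
        last_stime = now;
        spin_unlock(&sync_lock);

        if ( now < prev )             /* cross-CPU backwards step observed */
        {
            spin_lock(&sync_lock);
            max_warp = max(max_warp, prev - now);
            spin_unlock(&sync_lock);
        }
    }
}
--

Run concurrently on all CPUs, max_warp stays zero only if get_s_time() is
globally monotonic.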

> I.e. my plan was, once the backwards moves are small enough, to maybe
> indeed compensate them by maintaining a global variable tracking
> the most recently returned value. There are issues with such an
> approach too, though: HT effects can result in one hyperthread
> making it just past that check of the global, then hardware
> switching to the other hyperthread, NOW() producing a slightly
> larger value there, and hardware switching back to the first
> hyperthread only after the second one consumed the result of
> NOW(). Dario's use would be unaffected by this aiui, as his NOW()
> invocations are globally serialized through a spinlock, but arbitrary
> NOW() invocations on two hyperthreads can't be made such that
> the invoking party can be guaranteed to see strictly monotonic
> values.
> 
> And btw., similar considerations apply for two fully independent
> CPUs, if one runs at a much higher P-state than the other (i.e.
> the faster one could overtake the slower one between the
> monotonicity check in NOW() and the callers consuming the returned
> values). So in the end I'm not sure it's worth the performance hit
> such a global monotonicity check would incur, and therefore I didn't
> make a respective patch part of this series.
> 

Hm, guest pvclock should have faced similar issues too, as its local stamps
for each vCPU diverge. Linux commit 489fb49 ("x86, paravirt: Add a global
synchronization point for pvclock") describes a fix for situations much like
the scenarios you just described, which led to a global variable tracking the
most recently returned timestamp. One important chunk of that commit is pasted
below for convenience:

--
/*
 * Assumption here is that last_value, a global accumulator, always goes
 * forward. If we are less than that, we should not be much smaller.
 * We assume there is an error margin we're inside, and then the correction
 * does not sacrifice accuracy.
 *
 * For reads: global may have changed between test and return,
 * but this means someone else updated/poked the clock at a later time.
 * We just need to make sure we are not seeing a backwards event.
 *
 * For updates: last_value = ret is not enough, since two vcpus could be
 * updating at the same time, and one of them could be slightly behind,
 * making the assumption that last_value always goes forward fail to hold.
 */
 last = atomic64_read(&last_value);
 do {
     if (ret < last)
         return last;
     last = atomic64_cmpxchg(&last_value, last, ret);
 } while (unlikely(last != ret));
--
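Applied to Xen, the equivalent clamp would wrap NOW() along these lines (a
minimal sketch mirroring the pvclock loop above; monotonic_now() and
last_stime are illustrative names, not part of this series):

--
static s_time_t last_stime;

static s_time_t monotonic_now(void)
{
    s_time_t ret = NOW(), last = read_atomic(&last_stime);

    do {
        if ( ret < last )             /* clamp a backwards reading */
            return last;
        last = cmpxchg(&last_stime, last, ret);
    } while ( unlikely(last != ret) );

    return ret;
}
--

As with pvclock, the cost is a global cacheline bouncing on every call, which
is presumably the performance hit you're referring to.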

Though I am not sure what other options we would have, given such a heavy
reliance on the TSC right now.

Joao
