From: Jan Beulich <jbeulich@suse.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, roger.pau@citrix.com, wl@xen.org,
	andrew.cooper3@citrix.com
Subject: Re: [Xen-devel] [PATCH] x86/vpt: update last_guest_time with cmpxchg and drop pl_time_lock
Date: Thu, 20 Feb 2020 16:47:06 +0100	[thread overview]
Message-ID: <177eafce-7f19-0792-eac2-62ac7b13feb0@suse.com> (raw)
In-Reply-To: <eb6156eb-6a6d-28f5-c8ec-081f81444b99@citrix.com>

On 20.02.2020 16:37, Igor Druzhinin wrote:
> On 20/02/2020 08:27, Jan Beulich wrote:
>> On 19.02.2020 19:52, Igor Druzhinin wrote:
>>> On 19/02/2020 07:48, Jan Beulich wrote:
>>>> On 20.12.2019 22:39, Igor Druzhinin wrote:
>>>>> @@ -38,24 +37,22 @@ void hvm_init_guest_time(struct domain *d)
>>>>>  uint64_t hvm_get_guest_time_fixed(const struct vcpu *v, uint64_t at_tsc)
>>>>>  {
>>>>>      struct pl_time *pl = v->domain->arch.hvm.pl_time;
>>>>> -    u64 now;
>>>>> +    s_time_t old, new, now = get_s_time_fixed(at_tsc) + pl->stime_offset;
>>>>>  
>>>>>      /* Called from device models shared with PV guests. Be careful. */
>>>>>      ASSERT(is_hvm_vcpu(v));
>>>>>  
>>>>> -    spin_lock(&pl->pl_time_lock);
>>>>> -    now = get_s_time_fixed(at_tsc) + pl->stime_offset;
>>>>> -
>>>>>      if ( !at_tsc )
>>>>>      {
>>>>> -        if ( (int64_t)(now - pl->last_guest_time) > 0 )
>>>>> -            pl->last_guest_time = now;
>>>>> -        else
>>>>> -            now = ++pl->last_guest_time;
>>>>> +        do {
>>>>> +            old = pl->last_guest_time;
>>>>> +            new = now > pl->last_guest_time ? now : old + 1;
>>>>> +        } while ( cmpxchg(&pl->last_guest_time, old, new) != old );
>>>>
>>>> I wonder whether you wouldn't better re-invoke get_s_time() in
>>>> case you need to retry here. See how the function previously
>>>> was called only after the lock was already acquired.
>>>
>>> If there is a concurrent writer, wouldn't it just update pl->last_guest_time
>>> with the new get_s_time() and then we subsequently would just use the new
>>> time on retry?
>>
>> Yes, it would, but the latency until the retry actually occurs
>> is unknown (in particular if Xen itself runs virtualized). I.e.
>> in the at_tsc == 0 case I think the value would better be
>> re-calculated on every iteration.
> 
> Why does it need to be recalculated if a concurrent writer did this
> for us already anyway, and the (get_s_time_fixed(at_tsc) + pl->stime_offset)
> value is common to all vCPUs? Yes, it might reduce jitter slightly,
> but overall latency could come from any point (especially in case of
> running virtualized), and it's important just to preserve the invariant
> that the value is monotonic across vCPUs.

I'm afraid I don't follow: If we rely on remote CPUs updating
pl->last_guest_time, then what we'd return is whatever was put
there plus one. Whereas the correct value might be dozens of
clocks further ahead.

>> Another thing I notice only now is the multiple reads of
>> pl->last_guest_time. Wouldn't you better do
>>
>>         do {
>>             old = ACCESS_ONCE(pl->last_guest_time);
>>             new = now > old ? now : old + 1;
>>         } while ( cmpxchg(&pl->last_guest_time, old, new) != old );
>>
>> ?
> 
> Fair enough, although even reading it multiple times wouldn't cause
> any harm, as any inconsistency would be resolved by the cmpxchg op.

Afaics "new", if calculated from a value latched _earlier_
than "old", could cause time to actually move backwards. Reads
can be re-ordered, after all.

> I'd
> prefer to do that in a separate commit, to unify it with pv_soft_rdtsc().

I'd be fine if you changed pv_soft_rdtsc() first, and then
made the code here match. But I don't think the code should be
introduced in other than its (for the time being) final shape.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview:
2019-12-20 21:39 [Xen-devel] [PATCH] x86/vpt: update last_guest_time with cmpxchg and drop pl_time_lock Igor Druzhinin
2020-02-18 17:00 ` Jan Beulich
2020-02-18 17:06   ` Igor Druzhinin
2020-02-19  7:48 ` Jan Beulich
2020-02-19 18:52   ` Igor Druzhinin
2020-02-20  8:27     ` Jan Beulich
2020-02-20 15:37       ` Igor Druzhinin
2020-02-20 15:47         ` Jan Beulich [this message]
2020-02-20 16:08           ` Igor Druzhinin
