From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH] xen/domctl: Drop vcpu_alloc_lock
Date: Fri, 7 Sep 2018 10:57:22 +0100
Message-ID: <0f24ce6a-ff4a-002c-db82-890bdd401755@citrix.com>
In-Reply-To: <5B923B4002000078001E6383@prv1-mh.provo.novell.com>

On 07/09/18 09:48, Jan Beulich wrote:
>>>> On 05.09.18 at 21:15, <andrew.cooper3@citrix.com> wrote:
>> Since its introduction in c/s 8cbb5278e "x86/AMD: Add support for AMD's OSVW
>> feature in guests", the OSVW data has been corrected to be per-domain rather
>> than per-vcpu, and is initialised during XEN_DOMCTL_createdomain.
>>
>> Furthermore, because XENPF_microcode_update uses hypercall continuations to
>> move between CPUs, it drops the vcpu_alloc_lock mid-update, meaning that it
>> didn't provide the interlock guarantee that the OSVW patch was looking for in
>> the first place.
>>
>> This interlock serves no purpose, so take the opportunity to drop it and
>> remove a global spinlock from the hypervisor.

> I see you've rushed the patch in (perhaps to avoid objections, given
> that you've proposed this removal before, and I didn't really like it),
> so I guess we need to take it from there now. 

There was nothing deliberate here.  TBH, I thought the patch had been
pending on the list for longer than it had.  Either way, as we are
starting the conversation again...

> The interlock didn't work as intended, I agree, but "serves no purpose"
> is wrong imo.

At the moment, I stand by my statement, because as far as I can tell,
the interlock literally does nothing.
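
To make that concrete, here is a toy userspace model of the pattern
(plain C with pthreads; the names are made up and none of this is the
actual Xen code): an update path which releases the lock between
per-CPU steps, as a hypercall continuation forces it to, cannot
exclude a concurrent create for the duration of the whole update, so
taking the same lock on the create side buys nothing.

  /* toy_interlock.c - illustrative only, not Xen code.
   * Build: gcc -pthread toy_interlock.c */
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Models the microcode update: one "CPU" per iteration, with the
   * lock dropped between iterations, as a continuation would force. */
  static void *update(void *arg)
  {
      (void)arg;
      for (int cpu = 0; cpu < 4; cpu++) {
          pthread_mutex_lock(&alloc_lock);
          printf("updating cpu %d\n", cpu);
          pthread_mutex_unlock(&alloc_lock);   /* "continuation" point */
          usleep(1000);                        /* window for create() */
      }
      return NULL;
  }

  /* Models domain creation sampling state mid-update. */
  static void *create(void *arg)
  {
      (void)arg;
      pthread_mutex_lock(&alloc_lock);
      printf("created while the update is still in flight\n");
      pthread_mutex_unlock(&alloc_lock);
      return NULL;
  }

  int main(void)
  {
      pthread_t u, c;
      pthread_create(&u, NULL, update, NULL);
      usleep(500);                             /* land inside the update */
      pthread_create(&c, NULL, create, NULL);
      pthread_join(u, NULL);
      pthread_join(c, NULL);
      return 0;
  }

Run it a few times: the create lands somewhere in the middle of the
update, lock or no lock.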

> Rather than blindly dropping the logic, I'd have expected
> for it to be fixed: Despite the movement into XEN_DOMCTL_createdomain
> there's still a race between ucode updates and domain creation.

What race?  What have I overlooked?

~Andrew

