From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: <keir@xen.org>, <linux-kernel@vger.kernel.org>,
	<konrad@kernel.org>, <david.vrabel@citrix.com>,
	<jbeulich@suse.com>, <xen-devel@lists.xenproject.org>,
	<boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [XEN PATCH 1/2] hvm: Support more than 32 VCPUS when migrating.
Date: Wed, 9 Apr 2014 11:04:58 +0200	[thread overview]
Message-ID: <53450D3A.9090805@citrix.com> (raw)
In-Reply-To: <1397032397.31448.13.camel@kazak.uk.xensource.com>

On 09/04/14 10:33, Ian Campbell wrote:
> On Tue, 2014-04-08 at 14:53 -0400, Konrad Rzeszutek Wilk wrote:
>> On Tue, Apr 08, 2014 at 08:18:48PM +0200, Roger Pau Monné wrote:
>>> On 08/04/14 19:25, konrad@kernel.org wrote:
>>>> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>>
>>>> When we migrate an HVM guest, by default our shared_info can
>>>> only hold up to 32 CPUs. The VCPUOP_register_vcpu_info
>>>> hypercall was therefore introduced, which allows us to set up
>>>> per-page areas for VCPUs. This means we can boot a PVHVM
>>>> guest with more than 32 VCPUs. During migration the per-cpu
>>>> structure is allocated fresh by the hypervisor (vcpu_info_mfn
>>>> is set to INVALID_MFN) so that the newly migrated guest
>>>> can make the VCPUOP_register_vcpu_info hypercall.
>>>>
>>>> Unfortunately we end up triggering this condition:
>>>> /* Run this command on yourself or on other offline VCPUS. */
>>>>  if ( (v != current) && !test_bit(_VPF_down, &v->pause_flags) )
>>>>
>>>> which means we are unable to set up the per-cpu VCPU
>>>> structures for running vCPUs. The Linux PV code paths make
>>>> this work by iterating over every vCPU with:
>>>>
>>>>  1) is target CPU up (VCPUOP_is_up hypercall?)
>>>>  2) if yes, then VCPUOP_down to pause it.
>>>>  3) VCPUOP_register_vcpu_info
>>>>  4) if it was down, then VCPUOP_up to bring it back up
>>>>
>>>> But since VCPUOP_down, VCPUOP_is_up, and VCPUOP_up are
>>>> not allowed on HVM guests we can't do this. This patch
>>>> enables them for HVM guests.
>>>
>>> Hmmm, this looks like a very convoluted approach to something that could
>>> be solved more easily IMHO. What we do on FreeBSD is put all vCPUs into
>>> suspension, which means that all vCPUs except vCPU#0 will be in the
>>> cpususpend_handler, see:
>>>
>>> http://svnweb.freebsd.org/base/head/sys/amd64/amd64/mp_machdep.c?revision=263878&view=markup#l1460
>>
>> How do you 'suspend' them? If I remember correctly, there is a disadvantage
>> to doing this, as you have to bring all the CPUs "offline". In Linux that means
>> using stop_machine, which is a pretty big hammer and increases the latency of migration.
> 
> Yes, this is why the ability to have the toolstack save/restore the
> secondary vcpu state was added. It's especially important for
> checkpointing, but it's relevant to regular migrate as a performance
> improvement too.
> 
> It's not just stop-machine, IIRC it's a tonne of udev events relating to
> CPUs going off-/online etc. too, and all the userspace activity which that
> implies.

Well, what's done on FreeBSD is nothing like that. It's called the
cpususpend handler, but it's not off-lining CPUs or anything like that;
it just places the CPU in a while loop inside of an IPI handler, so we
can do something like this with all APs:

while (suspended)
    pause();

register_vcpu_info();

So the registration of the vcpu_info area happens just after the CPU is
woken from suspension and before it leaves the IPI handler, and it is the
CPU itself that calls VCPUOP_register_vcpu_info (so we avoid
the gate in Xen that prevents registering the vcpu_info area for CPUs
other than the current one).

Roger.


